May 17 00:28:44.026646 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:28:44.026659 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:28:44.026667 kernel: BIOS-provided physical RAM map:
May 17 00:28:44.026671 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
May 17 00:28:44.026675 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
May 17 00:28:44.026679 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
May 17 00:28:44.026684 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
May 17 00:28:44.026688 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
May 17 00:28:44.026692 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081a73fff] usable
May 17 00:28:44.026696 kernel: BIOS-e820: [mem 0x0000000081a74000-0x0000000081a74fff] ACPI NVS
May 17 00:28:44.026700 kernel: BIOS-e820: [mem 0x0000000081a75000-0x0000000081a75fff] reserved
May 17 00:28:44.026705 kernel: BIOS-e820: [mem 0x0000000081a76000-0x000000008afcdfff] usable
May 17 00:28:44.026709 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
May 17 00:28:44.026714 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
May 17 00:28:44.026719 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
May 17 00:28:44.026724 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
May 17 00:28:44.026729 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
May 17 00:28:44.026734 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
May 17 00:28:44.026739 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 17 00:28:44.026744 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
May 17 00:28:44.026748 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
May 17 00:28:44.026753 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
May 17 00:28:44.026758 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
May 17 00:28:44.026762 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
May 17 00:28:44.026767 kernel: NX (Execute Disable) protection: active
May 17 00:28:44.026772 kernel: APIC: Static calls initialized
May 17 00:28:44.026777 kernel: SMBIOS 3.2.1 present.
May 17 00:28:44.026781 kernel: DMI: Supermicro X11SCM-F/X11SCM-F, BIOS 2.6 12/03/2024
May 17 00:28:44.026787 kernel: tsc: Detected 3400.000 MHz processor
May 17 00:28:44.026792 kernel: tsc: Detected 3399.906 MHz TSC
May 17 00:28:44.026796 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:28:44.026802 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:28:44.026806 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
May 17 00:28:44.026811 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
May 17 00:28:44.026816 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:28:44.026821 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
May 17 00:28:44.026826 kernel: Using GB pages for direct mapping
May 17 00:28:44.026831 kernel: ACPI: Early table checksum verification disabled
May 17 00:28:44.026836 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
May 17 00:28:44.026841 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
May 17 00:28:44.026848 kernel: ACPI: FACP 0x000000008C58B670 000114 (v06 01072009 AMI 00010013)
May 17 00:28:44.026853 kernel: ACPI: DSDT 0x000000008C54F268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
May 17 00:28:44.026858 kernel: ACPI: FACS 0x000000008C66DF80 000040
May 17 00:28:44.026863 kernel: ACPI: APIC 0x000000008C58B788 00012C (v04 01072009 AMI 00010013)
May 17 00:28:44.026869 kernel: ACPI: FPDT 0x000000008C58B8B8 000044 (v01 01072009 AMI 00010013)
May 17 00:28:44.026875 kernel: ACPI: FIDT 0x000000008C58B900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
May 17 00:28:44.026880 kernel: ACPI: MCFG 0x000000008C58B9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
May 17 00:28:44.026885 kernel: ACPI: SPMI 0x000000008C58B9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
May 17 00:28:44.026890 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
May 17 00:28:44.026895 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
May 17 00:28:44.026900 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
May 17 00:28:44.026906 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
May 17 00:28:44.026911 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
May 17 00:28:44.026916 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
May 17 00:28:44.026921 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
May 17 00:28:44.026926 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
May 17 00:28:44.026931 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
May 17 00:28:44.026936 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
May 17 00:28:44.026942 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
May 17 00:28:44.026947 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
May 17 00:28:44.026953 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
May 17 00:28:44.026958 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013)
May 17 00:28:44.026963 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
May 17 00:28:44.026968 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
May 17 00:28:44.026973 kernel: ACPI: SSDT 0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
May 17 00:28:44.026978 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013)
May 17 00:28:44.026983 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
May 17 00:28:44.026988 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
May 17 00:28:44.026994 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
May 17 00:28:44.026999 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
May 17 00:28:44.027004 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
May 17 00:28:44.027010 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783]
May 17 00:28:44.027015 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b]
May 17 00:28:44.027020 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
May 17 00:28:44.027025 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3]
May 17 00:28:44.027030 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb]
May 17 00:28:44.027035 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b]
May 17 00:28:44.027041 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db]
May 17 00:28:44.027046 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20]
May 17 00:28:44.027051 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543]
May 17 00:28:44.027056 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d]
May 17 00:28:44.027061 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a]
May 17 00:28:44.027066 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77]
May 17 00:28:44.027071 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25]
May 17 00:28:44.027076 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b]
May 17 00:28:44.027081 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361]
May 17 00:28:44.027087 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb]
May 17 00:28:44.027092 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd]
May 17 00:28:44.027097 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1]
May 17 00:28:44.027102 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb]
May 17 00:28:44.027107 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598100-0x8c598153]
May 17 00:28:44.027112 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe]
May 17 00:28:44.027117 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f]
May 17 00:28:44.027122 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73]
May 17 00:28:44.027127 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab]
May 17 00:28:44.027133 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e]
May 17 00:28:44.027138 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67]
May 17 00:28:44.027144 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97]
May 17 00:28:44.027149 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7]
May 17 00:28:44.027154 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7]
May 17 00:28:44.027159 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59aff8-0x8c59b273]
May 17 00:28:44.027164 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9]
May 17 00:28:44.027169 kernel: No NUMA configuration found
May 17 00:28:44.027174 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
May 17 00:28:44.027179 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
May 17 00:28:44.027185 kernel: Zone ranges:
May 17 00:28:44.027190 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:28:44.027195 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 17 00:28:44.027200 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
May 17 00:28:44.027206 kernel: Movable zone start for each node
May 17 00:28:44.027211 kernel: Early memory node ranges
May 17 00:28:44.027216 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
May 17 00:28:44.027221 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
May 17 00:28:44.027226 kernel: node 0: [mem 0x0000000040400000-0x0000000081a73fff]
May 17 00:28:44.027232 kernel: node 0: [mem 0x0000000081a76000-0x000000008afcdfff]
May 17 00:28:44.027237 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
May 17 00:28:44.027244 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
May 17 00:28:44.027250 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
May 17 00:28:44.027259 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
May 17 00:28:44.027265 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:28:44.027270 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
May 17 00:28:44.027276 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
May 17 00:28:44.027282 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
May 17 00:28:44.027288 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
May 17 00:28:44.027293 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
May 17 00:28:44.027299 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
May 17 00:28:44.027304 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
May 17 00:28:44.027310 kernel: ACPI: PM-Timer IO Port: 0x1808
May 17 00:28:44.027315 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
May 17 00:28:44.027321 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
May 17 00:28:44.027326 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
May 17 00:28:44.027332 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
May 17 00:28:44.027338 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
May 17 00:28:44.027343 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
May 17 00:28:44.027348 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
May 17 00:28:44.027354 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
May 17 00:28:44.027359 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
May 17 00:28:44.027365 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
May 17 00:28:44.027370 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
May 17 00:28:44.027375 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
May 17 00:28:44.027382 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
May 17 00:28:44.027387 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
May 17 00:28:44.027392 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
May 17 00:28:44.027398 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
May 17 00:28:44.027403 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
May 17 00:28:44.027409 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:28:44.027414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:28:44.027420 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:28:44.027425 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:28:44.027432 kernel: TSC deadline timer available
May 17 00:28:44.027437 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
May 17 00:28:44.027443 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
May 17 00:28:44.027448 kernel: Booting paravirtualized kernel on bare hardware
May 17 00:28:44.027454 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:28:44.027459 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
May 17 00:28:44.027465 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
May 17 00:28:44.027470 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
May 17 00:28:44.027476 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
May 17 00:28:44.027483 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:28:44.027488 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:28:44.027494 kernel: random: crng init done
May 17 00:28:44.027499 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
May 17 00:28:44.027504 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
May 17 00:28:44.027510 kernel: Fallback order for Node 0: 0
May 17 00:28:44.027515 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
May 17 00:28:44.027521 kernel: Policy zone: Normal
May 17 00:28:44.027527 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:28:44.027533 kernel: software IO TLB: area num 16.
May 17 00:28:44.027538 kernel: Memory: 32720308K/33452984K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 732416K reserved, 0K cma-reserved)
May 17 00:28:44.027544 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
May 17 00:28:44.027549 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:28:44.027555 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:28:44.027560 kernel: Dynamic Preempt: voluntary
May 17 00:28:44.027566 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:28:44.027572 kernel: rcu: RCU event tracing is enabled.
May 17 00:28:44.027578 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
May 17 00:28:44.027584 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:28:44.027589 kernel: Rude variant of Tasks RCU enabled.
May 17 00:28:44.027595 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:28:44.027600 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:28:44.027606 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
May 17 00:28:44.027611 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
May 17 00:28:44.027616 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:28:44.027622 kernel: Console: colour dummy device 80x25
May 17 00:28:44.027627 kernel: printk: console [tty0] enabled
May 17 00:28:44.027634 kernel: printk: console [ttyS1] enabled
May 17 00:28:44.027639 kernel: ACPI: Core revision 20230628
May 17 00:28:44.027645 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
May 17 00:28:44.027650 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:28:44.027656 kernel: DMAR: Host address width 39
May 17 00:28:44.027661 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
May 17 00:28:44.027667 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
May 17 00:28:44.027672 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
May 17 00:28:44.027678 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
May 17 00:28:44.027684 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
May 17 00:28:44.027690 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
May 17 00:28:44.027695 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
May 17 00:28:44.027701 kernel: x2apic enabled
May 17 00:28:44.027706 kernel: APIC: Switched APIC routing to: cluster x2apic
May 17 00:28:44.027712 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
May 17 00:28:44.027717 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
May 17 00:28:44.027723 kernel: CPU0: Thermal monitoring enabled (TM1)
May 17 00:28:44.027728 kernel: process: using mwait in idle threads
May 17 00:28:44.027734 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 17 00:28:44.027739 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 17 00:28:44.027745 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:28:44.027750 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
May 17 00:28:44.027756 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
May 17 00:28:44.027761 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
May 17 00:28:44.027766 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
May 17 00:28:44.027772 kernel: RETBleed: Mitigation: Enhanced IBRS
May 17 00:28:44.027777 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:28:44.027782 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 00:28:44.027788 kernel: TAA: Mitigation: TSX disabled
May 17 00:28:44.027794 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
May 17 00:28:44.027799 kernel: SRBDS: Mitigation: Microcode
May 17 00:28:44.027805 kernel: GDS: Mitigation: Microcode
May 17 00:28:44.027810 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:28:44.027816 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:28:44.027821 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:28:44.027827 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
May 17 00:28:44.027832 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
May 17 00:28:44.027837 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:28:44.027843 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
May 17 00:28:44.027848 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
May 17 00:28:44.027854 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
May 17 00:28:44.027860 kernel: Freeing SMP alternatives memory: 32K
May 17 00:28:44.027865 kernel: pid_max: default: 32768 minimum: 301
May 17 00:28:44.027871 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:28:44.027876 kernel: landlock: Up and running.
May 17 00:28:44.027881 kernel: SELinux: Initializing.
May 17 00:28:44.027887 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:28:44.027892 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:28:44.027898 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
May 17 00:28:44.027903 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 17 00:28:44.027909 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 17 00:28:44.027915 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 17 00:28:44.027921 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
May 17 00:28:44.027926 kernel: ... version: 4
May 17 00:28:44.027932 kernel: ... bit width: 48
May 17 00:28:44.027937 kernel: ... generic registers: 4
May 17 00:28:44.027942 kernel: ... value mask: 0000ffffffffffff
May 17 00:28:44.027948 kernel: ... max period: 00007fffffffffff
May 17 00:28:44.027953 kernel: ... fixed-purpose events: 3
May 17 00:28:44.027959 kernel: ... event mask: 000000070000000f
May 17 00:28:44.027965 kernel: signal: max sigframe size: 2032
May 17 00:28:44.027971 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
May 17 00:28:44.027976 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:28:44.027982 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:28:44.027987 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
May 17 00:28:44.027993 kernel: smp: Bringing up secondary CPUs ...
May 17 00:28:44.027998 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:28:44.028004 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
May 17 00:28:44.028009 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 17 00:28:44.028016 kernel: smp: Brought up 1 node, 16 CPUs
May 17 00:28:44.028021 kernel: smpboot: Max logical packages: 1
May 17 00:28:44.028027 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
May 17 00:28:44.028032 kernel: devtmpfs: initialized
May 17 00:28:44.028038 kernel: x86/mm: Memory block size: 128MB
May 17 00:28:44.028043 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a74000-0x81a74fff] (4096 bytes)
May 17 00:28:44.028049 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
May 17 00:28:44.028054 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:28:44.028061 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
May 17 00:28:44.028066 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:28:44.028071 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:28:44.028077 kernel: audit: initializing netlink subsys (disabled)
May 17 00:28:44.028082 kernel: audit: type=2000 audit(1747441718.038:1): state=initialized audit_enabled=0 res=1
May 17 00:28:44.028088 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:28:44.028093 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:28:44.028098 kernel: cpuidle: using governor menu
May 17 00:28:44.028104 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:28:44.028110 kernel: dca service started, version 1.12.1
May 17 00:28:44.028116 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 17 00:28:44.028121 kernel: PCI: Using configuration type 1 for base access
May 17 00:28:44.028127 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
May 17 00:28:44.028132 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:28:44.028137 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:28:44.028143 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:28:44.028162 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:28:44.028167 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:28:44.028173 kernel: ACPI: Added _OSI(Module Device)
May 17 00:28:44.028179 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:28:44.028184 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:28:44.028189 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:28:44.028195 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
May 17 00:28:44.028200 kernel: ACPI: Dynamic OEM Table Load:
May 17 00:28:44.028205 kernel: ACPI: SSDT 0xFFFF988940E3A800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
May 17 00:28:44.028211 kernel: ACPI: Dynamic OEM Table Load:
May 17 00:28:44.028216 kernel: ACPI: SSDT 0xFFFF988941E0A800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
May 17 00:28:44.028222 kernel: ACPI: Dynamic OEM Table Load:
May 17 00:28:44.028228 kernel: ACPI: SSDT 0xFFFF988940DE4000 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
May 17 00:28:44.028233 kernel: ACPI: Dynamic OEM Table Load:
May 17 00:28:44.028238 kernel: ACPI: SSDT 0xFFFF988941E08800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
May 17 00:28:44.028245 kernel: ACPI: Dynamic OEM Table Load:
May 17 00:28:44.028250 kernel: ACPI: SSDT 0xFFFF988940E51000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
May 17 00:28:44.028256 kernel: ACPI: Dynamic OEM Table Load:
May 17 00:28:44.028261 kernel: ACPI: SSDT 0xFFFF988940FA4800 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
May 17 00:28:44.028266 kernel: ACPI: _OSC evaluated successfully for all CPUs
May 17 00:28:44.028272 kernel: ACPI: Interpreter enabled
May 17 00:28:44.028278 kernel: ACPI: PM: (supports S0 S5)
May 17 00:28:44.028283 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:28:44.028289 kernel: HEST: Enabling Firmware First mode for corrected errors.
May 17 00:28:44.028294 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
May 17 00:28:44.028299 kernel: HEST: Table parsing has been initialized.
May 17 00:28:44.028305 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
May 17 00:28:44.028310 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:28:44.028315 kernel: PCI: Ignoring E820 reservations for host bridge windows
May 17 00:28:44.028321 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
May 17 00:28:44.028327 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
May 17 00:28:44.028333 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
May 17 00:28:44.028338 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
May 17 00:28:44.028344 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
May 17 00:28:44.028349 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
May 17 00:28:44.028354 kernel: ACPI: \_TZ_.FN00: New power resource
May 17 00:28:44.028360 kernel: ACPI: \_TZ_.FN01: New power resource
May 17 00:28:44.028365 kernel: ACPI: \_TZ_.FN02: New power resource
May 17 00:28:44.028370 kernel: ACPI: \_TZ_.FN03: New power resource
May 17 00:28:44.028376 kernel: ACPI: \_TZ_.FN04: New power resource
May 17 00:28:44.028382 kernel: ACPI: \PIN_: New power resource
May 17 00:28:44.028387 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
May 17 00:28:44.028460 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:28:44.028514 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
May 17 00:28:44.028561 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
May 17 00:28:44.028569 kernel: PCI host bridge to bus 0000:00
May 17 00:28:44.028622 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:28:44.028665 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:28:44.028707 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:28:44.028749 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
May 17 00:28:44.028791 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
May 17 00:28:44.028831 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
May 17 00:28:44.028888 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
May 17 00:28:44.028947 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
May 17 00:28:44.028997 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
May 17 00:28:44.029049 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
May 17 00:28:44.029098 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
May 17 00:28:44.029149 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
May 17 00:28:44.029197 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
May 17 00:28:44.029255 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
May 17 00:28:44.029304 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
May 17 00:28:44.029352 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
May 17 00:28:44.029402 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
May 17 00:28:44.029450 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
May 17 00:28:44.029496 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
May 17 00:28:44.029551 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
May 17 00:28:44.029598 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 17 00:28:44.029652 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
May 17 00:28:44.029699 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 17 00:28:44.029751 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
May 17 00:28:44.029798 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
May 17 00:28:44.029847 kernel: pci 0000:00:16.0: PME# supported from D3hot
May 17 00:28:44.029897 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
May 17 00:28:44.029945 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
May 17 00:28:44.030001 kernel: pci 0000:00:16.1: PME# supported from D3hot
May 17 00:28:44.030052 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
May 17 00:28:44.030100 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
May 17 00:28:44.030146 kernel: pci 0000:00:16.4: PME# supported from D3hot
May 17 00:28:44.030200 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
May 17 00:28:44.030251 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
May 17 00:28:44.030338 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
May 17 00:28:44.030387 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
May 17 00:28:44.030435 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
May 17 00:28:44.030482 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
May 17 00:28:44.030529 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
May 17 00:28:44.030578 kernel: pci 0000:00:17.0: PME# supported from D3hot
May 17 00:28:44.030631 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
May 17 00:28:44.030682 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
May 17 00:28:44.030738 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
May 17 00:28:44.030790 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
May 17 00:28:44.030841 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
May 17 00:28:44.030890 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
May 17 00:28:44.030941 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
May 17 00:28:44.030991 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
May 17 00:28:44.031042 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
May 17 00:28:44.031093 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
May 17 00:28:44.031145 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
May 17 00:28:44.031193 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 17 00:28:44.031310 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
May 17 00:28:44.031365 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
May 17 00:28:44.031413 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
May 17 00:28:44.031463 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
May 17 00:28:44.031518 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
May 17 00:28:44.031565 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
May 17 00:28:44.031620 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
May 17 00:28:44.031669 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
May 17 00:28:44.031718 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
May 17 00:28:44.031769 kernel: pci 0000:01:00.0: PME# supported from D3cold
May 17 00:28:44.031818 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
May 17 00:28:44.031867 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
May 17 00:28:44.031922 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
May 17 00:28:44.031972 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
May 17 00:28:44.032020 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
May 17 00:28:44.032069 kernel: pci 0000:01:00.1: PME# supported from D3cold
May 17 00:28:44.032119 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
May 17 00:28:44.032169 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
May 17 00:28:44.032218 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 17 00:28:44.032269 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
May 17 00:28:44.032347 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
May 17 00:28:44.032395 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
May 17 00:28:44.032449 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
May 17 00:28:44.032498 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
May 17 00:28:44.032550 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
May 17 00:28:44.032598 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
May 17 00:28:44.032646 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
May 17 00:28:44.032695 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
May 17 00:28:44.032744 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
May 17 00:28:44.032792 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
May 17 00:28:44.032839 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
May 17 00:28:44.032896 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
May 17 00:28:44.032945 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
May 17 00:28:44.032995 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
May 17 00:28:44.033044 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
May 17 00:28:44.033093 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
May 17 00:28:44.033142 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
May 17 00:28:44.033191 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
May 17 00:28:44.033239 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
May 17 00:28:44.033336 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
May 17 00:28:44.033385 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
May 17 00:28:44.033438 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
May 17 00:28:44.033488 kernel: pci 0000:06:00.0: enabling Extended Tags
May 17 00:28:44.033537 kernel: pci 0000:06:00.0: supports D1 D2
May 17 00:28:44.033587 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 17 00:28:44.033634 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
May 17 00:28:44.033685 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
May 17 00:28:44.033732 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
May 17 00:28:44.033788 kernel: pci_bus 0000:07: extended config space not accessible
May 17 00:28:44.033844 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
May 17 00:28:44.033897 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
May 17 00:28:44.033948 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
May 17 00:28:44.033998 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
May 17 00:28:44.034051 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:28:44.034102 kernel: pci 0000:07:00.0: supports D1 D2
May 17 00:28:44.034151 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 17 00:28:44.034201 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
May 17 00:28:44.034252 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
May 17 00:28:44.034302 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
May 17 00:28:44.034310 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
May 17 00:28:44.034316 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
May 17 00:28:44.034324 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
May 17 00:28:44.034330 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
May 17 00:28:44.034335 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
May 17 00:28:44.034341 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
May 17 00:28:44.034347 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
May 17 00:28:44.034352 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
May 17 00:28:44.034358 kernel: iommu: Default domain type: Translated
May 17 00:28:44.034363 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:28:44.034369 kernel: PCI: Using ACPI for IRQ routing
May 17 00:28:44.034376 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:28:44.034381 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
May 17 00:28:44.034387 kernel: e820: reserve RAM buffer [mem 0x81a74000-0x83ffffff]
May 17 00:28:44.034392 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff]
May 17 00:28:44.034398 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff]
May 17 00:28:44.034403 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
May 17 00:28:44.034409 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
May 17 00:28:44.034458 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
May 17 00:28:44.034509 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
May 17 00:28:44.034563 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:28:44.034572 kernel: vgaarb: loaded
May 17 00:28:44.034578 kernel: clocksource: Switched to clocksource tsc-early
May 17 00:28:44.034583 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:28:44.034589 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:28:44.034595 kernel: pnp: PnP ACPI init
May 17 00:28:44.034645 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
May 17 00:28:44.034694 kernel: pnp 00:02: [dma 0 disabled]
May 17 00:28:44.034743 kernel: pnp 00:03: [dma 0 disabled]
May 17 00:28:44.034794 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
May 17 00:28:44.034838 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
May 17 00:28:44.034885 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
May 17 00:28:44.034930 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
May 17 00:28:44.034975 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
May 17 00:28:44.035021 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
May 17 00:28:44.035065 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
May 17 00:28:44.035110 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
May 17 00:28:44.035155 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
May 17 00:28:44.035198 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
May 17 00:28:44.035245 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
May 17 00:28:44.035293 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
May 17 00:28:44.035339 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
May 17 00:28:44.035384 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
May 17 00:28:44.035427 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
May 17 00:28:44.035470 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
May 17 00:28:44.035513 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
May 17 00:28:44.035556 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
May 17 00:28:44.035603 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
May 17 00:28:44.035613 kernel: pnp: PnP ACPI: found 10 devices
May 17 00:28:44.035619 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:28:44.035626 kernel: NET: Registered PF_INET protocol family
May 17 00:28:44.035632 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:28:44.035637 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
May 17 00:28:44.035643 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:28:44.035649 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:28:44.035655 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 17 00:28:44.035661 kernel: TCP: Hash tables configured (established 262144 bind 65536)
May 17 00:28:44.035667 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 17 00:28:44.035673 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 17 00:28:44.035679 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:28:44.035684 kernel: NET: Registered PF_XDP protocol family
May 17 00:28:44.035732 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
May 17 00:28:44.035780 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
May 17 00:28:44.035829 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
May 17 00:28:44.035880 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
May 17 00:28:44.035931 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
May 17 00:28:44.035981 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
May 17 00:28:44.036028 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
May 17 00:28:44.036077 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 17 00:28:44.036124 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
May 17 00:28:44.036172 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
May 17 00:28:44.036219 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
May 17 00:28:44.036272 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
May 17 00:28:44.036319 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
May 17 00:28:44.036366 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
May 17 00:28:44.036413 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
May 17 00:28:44.036461 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
May 17 00:28:44.036510 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
May 17 00:28:44.036558 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
May 17 00:28:44.036607 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
May 17 00:28:44.036655 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
May 17 00:28:44.036705 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
May 17 00:28:44.036751 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
May 17 00:28:44.036799 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
May 17 00:28:44.036846 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
May 17 00:28:44.036891 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
May 17 00:28:44.036936 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:28:44.036978 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:28:44.037020 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:28:44.037062 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
May 17 00:28:44.037103 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
May 17 00:28:44.037152 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
May 17 00:28:44.037197 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
May 17 00:28:44.037257 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
May 17 00:28:44.037301 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
May 17 00:28:44.037351 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
May 17 00:28:44.037395 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
May 17 00:28:44.037444 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
May 17 00:28:44.037489 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
May 17 00:28:44.037537 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
May 17 00:28:44.037583 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
May 17 00:28:44.037591 kernel: PCI: CLS 64 bytes, default 64
May 17 00:28:44.037597 kernel: DMAR: No ATSR found
May 17 00:28:44.037603 kernel: DMAR: No SATC found
May 17 00:28:44.037609 kernel: DMAR: dmar0: Using Queued invalidation
May 17 00:28:44.037657 kernel: pci 0000:00:00.0: Adding to iommu group 0
May 17 00:28:44.037706 kernel: pci 0000:00:01.0: Adding to iommu group 1
May 17 00:28:44.037753 kernel: pci 0000:00:08.0: Adding to iommu group 2
May 17 00:28:44.037803 kernel: pci 0000:00:12.0: Adding to iommu group 3
May 17 00:28:44.037849 kernel: pci 0000:00:14.0: Adding to iommu group 4
May 17 00:28:44.037897 kernel: pci 0000:00:14.2: Adding to iommu group 4
May 17 00:28:44.037943 kernel: pci 0000:00:15.0: Adding to iommu group 5
May 17 00:28:44.037990 kernel: pci 0000:00:15.1: Adding to iommu group 5
May 17 00:28:44.038037 kernel: pci 0000:00:16.0: Adding to iommu group 6
May 17 00:28:44.038084 kernel: pci 0000:00:16.1: Adding to iommu group 6
May 17 00:28:44.038131 kernel: pci 0000:00:16.4: Adding to iommu group 6
May 17 00:28:44.038181 kernel: pci 0000:00:17.0: Adding to iommu group 7
May 17 00:28:44.038229 kernel: pci 0000:00:1b.0: Adding to iommu group 8
May 17 00:28:44.038279 kernel: pci 0000:00:1b.4: Adding to iommu group 9
May 17 00:28:44.038327 kernel: pci 0000:00:1b.5: Adding to iommu group 10
May 17 00:28:44.038374 kernel: pci 0000:00:1c.0: Adding to iommu group 11
May 17 00:28:44.038422 kernel: pci 0000:00:1c.3: Adding to iommu group 12
May 17 00:28:44.038468 kernel: pci 0000:00:1e.0: Adding to iommu group 13
May 17 00:28:44.038516 kernel: pci 0000:00:1f.0: Adding to iommu group 14
May 17 00:28:44.038565 kernel: pci 0000:00:1f.4: Adding to iommu group 14
May 17 00:28:44.038613 kernel: pci 0000:00:1f.5: Adding to iommu group 14
May 17 00:28:44.038661 kernel: pci 0000:01:00.0: Adding to iommu group 1
May 17 00:28:44.038711 kernel: pci 0000:01:00.1: Adding to iommu group 1
May 17 00:28:44.038759 kernel: pci 0000:03:00.0: Adding to iommu group 15
May 17 00:28:44.038808 kernel: pci 0000:04:00.0: Adding to iommu group 16
May 17 00:28:44.038857 kernel: pci 0000:06:00.0: Adding to iommu group 17
May 17 00:28:44.038906 kernel: pci 0000:07:00.0: Adding to iommu group 17
May 17 00:28:44.038916 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
May 17 00:28:44.038923 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 17 00:28:44.038928 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB)
May 17 00:28:44.038934 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
May 17 00:28:44.038940 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
May 17 00:28:44.038946 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
May 17 00:28:44.038951 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
May 17 00:28:44.039003 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
May 17 00:28:44.039014 kernel: Initialise system trusted keyrings
May 17 00:28:44.039020 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
May 17 00:28:44.039025 kernel: Key type asymmetric registered
May 17 00:28:44.039031 kernel: Asymmetric key parser 'x509' registered
May 17 00:28:44.039036 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:28:44.039042 kernel: io scheduler mq-deadline registered
May 17 00:28:44.039048 kernel: io scheduler kyber registered
May 17 00:28:44.039054 kernel: io scheduler bfq registered
May 17 00:28:44.039100 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
May 17 00:28:44.039150 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
May 17 00:28:44.039197 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
May 17 00:28:44.039248 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
May 17 00:28:44.039334 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
May 17 00:28:44.039382 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
May 17 00:28:44.039433 kernel: thermal LNXTHERM:00: registered as thermal_zone0
May 17 00:28:44.039442 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
May 17 00:28:44.039448 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
May 17 00:28:44.039456 kernel: pstore: Using crash dump compression: deflate
May 17 00:28:44.039462 kernel: pstore: Registered erst as persistent store backend
May 17 00:28:44.039468 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:28:44.039473 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:28:44.039479 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:28:44.039485 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 17 00:28:44.039490 kernel: hpet_acpi_add: no address or irqs in _CRS
May 17 00:28:44.039542 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
May 17 00:28:44.039552 kernel: i8042: PNP: No PS/2 controller found.
May 17 00:28:44.039596 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
May 17 00:28:44.039639 kernel: rtc_cmos rtc_cmos: registered as rtc0
May 17 00:28:44.039684 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-05-17T00:28:42 UTC (1747441722)
May 17 00:28:44.039728 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
May 17 00:28:44.039736 kernel: intel_pstate: Intel P-state driver initializing
May 17 00:28:44.039742 kernel: intel_pstate: Disabling energy efficiency optimization
May 17 00:28:44.039748 kernel: intel_pstate: HWP enabled
May 17 00:28:44.039755 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
May 17 00:28:44.039761 kernel: vesafb: scrolling: redraw
May 17 00:28:44.039766 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
May 17 00:28:44.039772 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000a96ba51f, using 768k, total 768k
May 17 00:28:44.039778 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:28:44.039784 kernel: fb0: VESA VGA frame buffer device
May 17 00:28:44.039789 kernel: NET: Registered PF_INET6 protocol family
May 17 00:28:44.039795 kernel: Segment Routing with IPv6
May 17 00:28:44.039801 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:28:44.039808 kernel: NET: Registered PF_PACKET protocol family
May 17 00:28:44.039814 kernel: Key type dns_resolver registered
May 17 00:28:44.039819 kernel: microcode: Current revision: 0x00000102
May 17 00:28:44.039825 kernel: microcode: Microcode Update Driver: v2.2.
May 17 00:28:44.039830 kernel: IPI shorthand broadcast: enabled
May 17 00:28:44.039836 kernel: sched_clock: Marking stable (2482072145, 1378392188)->(4396076078, -535611745)
May 17 00:28:44.039842 kernel: registered taskstats version 1
May 17 00:28:44.039847 kernel: Loading compiled-in X.509 certificates
May 17 00:28:44.039853 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:28:44.039860 kernel: Key type .fscrypt registered
May 17 00:28:44.039865 kernel: Key type fscrypt-provisioning registered
May 17 00:28:44.039871 kernel: ima: Allocated hash algorithm: sha1
May 17 00:28:44.039877 kernel: ima: No architecture policies found
May 17 00:28:44.039882 kernel: clk: Disabling unused clocks
May 17 00:28:44.039888 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:28:44.039894 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:28:44.039899 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:28:44.039905 kernel: Run /init as init process
May 17 00:28:44.039912 kernel: with arguments:
May 17 00:28:44.039918 kernel: /init
May 17 00:28:44.039923 kernel: with environment:
May 17 00:28:44.039929 kernel: HOME=/
May 17 00:28:44.039934 kernel: TERM=linux
May 17 00:28:44.039940 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:28:44.039947 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:28:44.039954 systemd[1]: Detected architecture x86-64.
May 17 00:28:44.039961 systemd[1]: Running in initrd.
May 17 00:28:44.039967 systemd[1]: No hostname configured, using default hostname.
May 17 00:28:44.039973 systemd[1]: Hostname set to .
May 17 00:28:44.039978 systemd[1]: Initializing machine ID from random generator.
May 17 00:28:44.039984 systemd[1]: Queued start job for default target initrd.target.
May 17 00:28:44.039990 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:28:44.039996 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:28:44.040002 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:28:44.040009 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:28:44.040015 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:28:44.040021 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:28:44.040028 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:28:44.040034 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz
May 17 00:28:44.040040 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns
May 17 00:28:44.040046 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:28:44.040052 kernel: clocksource: Switched to clocksource tsc
May 17 00:28:44.040058 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:28:44.040064 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:28:44.040070 systemd[1]: Reached target paths.target - Path Units.
May 17 00:28:44.040076 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:28:44.040083 systemd[1]: Reached target swap.target - Swaps.
May 17 00:28:44.040089 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:28:44.040094 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:28:44.040102 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:28:44.040108 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:28:44.040114 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:28:44.040120 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:28:44.040126 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:28:44.040132 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:28:44.040137 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:28:44.040143 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:28:44.040149 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:28:44.040156 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:28:44.040162 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:28:44.040168 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:28:44.040184 systemd-journald[266]: Collecting audit messages is disabled.
May 17 00:28:44.040199 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:28:44.040206 systemd-journald[266]: Journal started May 17 00:28:44.040219 systemd-journald[266]: Runtime Journal (/run/log/journal/d6c2772e212947a58dc659cf2cfc6464) is 8.0M, max 639.9M, 631.9M free. May 17 00:28:44.063237 systemd-modules-load[268]: Inserted module 'overlay' May 17 00:28:44.085260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:28:44.113916 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:28:44.183486 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:28:44.183499 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:28:44.183512 kernel: Bridge firewalling registered May 17 00:28:44.169433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:28:44.173382 systemd-modules-load[268]: Inserted module 'br_netfilter' May 17 00:28:44.195582 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:28:44.216558 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:28:44.225581 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:28:44.261639 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:28:44.274237 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:28:44.302801 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:28:44.311052 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:28:44.314508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:28:44.315508 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:28:44.316056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:28:44.321410 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:28:44.322249 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:28:44.322677 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:28:44.326535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:28:44.339731 systemd-resolved[303]: Positive Trust Anchors: May 17 00:28:44.339738 systemd-resolved[303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:28:44.339765 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:28:44.341488 systemd-resolved[303]: Defaulting to hostname 'linux'. May 17 00:28:44.368490 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 17 00:28:44.385702 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:28:44.424800 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:28:44.534671 dracut-cmdline[307]: dracut-dracut-053 May 17 00:28:44.542459 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:28:44.739287 kernel: SCSI subsystem initialized May 17 00:28:44.762286 kernel: Loading iSCSI transport class v2.0-870. May 17 00:28:44.785281 kernel: iscsi: registered transport (tcp) May 17 00:28:44.816820 kernel: iscsi: registered transport (qla4xxx) May 17 00:28:44.816837 kernel: QLogic iSCSI HBA Driver May 17 00:28:44.850076 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:28:44.874515 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:28:44.932015 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:28:44.932034 kernel: device-mapper: uevent: version 1.0.3 May 17 00:28:44.951700 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:28:45.009328 kernel: raid6: avx2x4 gen() 53560 MB/s May 17 00:28:45.041330 kernel: raid6: avx2x2 gen() 54060 MB/s May 17 00:28:45.077679 kernel: raid6: avx2x1 gen() 45193 MB/s May 17 00:28:45.077696 kernel: raid6: using algorithm avx2x2 gen() 54060 MB/s May 17 00:28:45.124752 kernel: raid6: .... xor() 30878 MB/s, rmw enabled May 17 00:28:45.124772 kernel: raid6: using avx2x2 recovery algorithm May 17 00:28:45.165287 kernel: xor: automatically using best checksumming function avx May 17 00:28:45.279279 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:28:45.284939 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:28:45.315586 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:28:45.322721 systemd-udevd[491]: Using default interface naming scheme 'v255'. May 17 00:28:45.325138 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:28:45.361496 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:28:45.403420 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation May 17 00:28:45.420577 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:28:45.447580 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:28:45.507586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:28:45.564749 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:28:45.564771 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:28:45.564785 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:28:45.571249 kernel: libata version 3.00 loaded. May 17 00:28:45.580252 kernel: ACPI: bus type USB registered May 17 00:28:45.580290 kernel: PTP clock support registered May 17 00:28:45.580299 kernel: AVX2 version of gcm_enc/dec engaged. 
May 17 00:28:45.582601 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:28:45.743678 kernel: usbcore: registered new interface driver usbfs May 17 00:28:45.743695 kernel: AES CTR mode by8 optimization enabled May 17 00:28:45.743705 kernel: ahci 0000:00:17.0: version 3.0 May 17 00:28:45.743831 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode May 17 00:28:45.743934 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst May 17 00:28:45.744033 kernel: usbcore: registered new interface driver hub May 17 00:28:45.744051 kernel: scsi host0: ahci May 17 00:28:45.744152 kernel: usbcore: registered new device driver usb May 17 00:28:45.744167 kernel: scsi host1: ahci May 17 00:28:45.744264 kernel: scsi host2: ahci May 17 00:28:45.758249 kernel: scsi host3: ahci May 17 00:28:45.771797 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:28:45.809352 kernel: scsi host4: ahci May 17 00:28:45.809448 kernel: scsi host5: ahci May 17 00:28:45.809513 kernel: scsi host6: ahci May 17 00:28:45.787481 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:28:45.944638 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 May 17 00:28:45.944656 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 May 17 00:28:45.944664 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 May 17 00:28:45.944672 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 May 17 00:28:45.944679 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 May 17 00:28:45.944686 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 May 17 00:28:45.944693 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 May 17 00:28:45.944700 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 17 00:28:45.944707 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. May 17 00:28:45.924319 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:28:45.963475 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:28:46.001818 kernel: igb 0000:03:00.0: added PHC on eth0 May 17 00:28:46.001915 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 00:28:46.001885 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:28:46.053112 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:78 May 17 00:28:46.053216 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 May 17 00:28:46.053327 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) May 17 00:28:46.002007 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:28:46.100770 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 May 17 00:28:46.100859 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 00:28:46.100927 kernel: igb 0000:04:00.0: added PHC on eth1 May 17 00:28:46.100995 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 00:28:46.107651 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 17 00:28:46.171352 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:79 May 17 00:28:46.171512 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 May 17 00:28:46.171580 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) May 17 00:28:46.167491 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:28:46.171396 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:28:46.171560 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:28:46.275236 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:28:46.275254 kernel: ata7: SATA link down (SStatus 0 SControl 300) May 17 00:28:46.275262 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:28:46.275270 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 17 00:28:46.275276 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 00:28:46.192294 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:28:46.311932 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 00:28:46.311944 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:28:46.311952 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 May 17 00:28:46.198505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:28:46.386580 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 May 17 00:28:46.386592 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 00:28:46.386603 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) May 17 00:28:46.386693 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 00:28:46.386702 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged May 17 00:28:46.386769 kernel: ata1.00: Features: NCQ-prio May 17 00:28:46.299600 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:28:46.490193 kernel: ata2.00: Features: NCQ-prio May 17 00:28:46.490210 kernel: ata1.00: configured for UDMA/133 May 17 00:28:46.490221 kernel: ata2.00: configured for UDMA/133 May 17 00:28:46.490231 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 May 17 00:28:46.490315 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 May 17 00:28:46.490380 kernel: igb 0000:04:00.0 eno2: renamed from eth1 May 17 00:28:46.299651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:28:46.344456 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:28:46.543097 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 00:28:46.543200 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 May 17 00:28:46.543275 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 May 17 00:28:46.543340 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 00:28:46.497043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
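[Editor's note: the "SATA link up 6.0 Gbps (SStatus 133 SControl 300)" reports above encode the PHY state in the SStatus register, printed in hex by libata. A small decode sketch, with field layout per the SATA spec:]

    # Decode the SStatus value libata prints for ata1/ata2 above.
    sstatus = 0x133
    det = sstatus & 0xF          # 3: device detected, PHY communication established
    spd = (sstatus >> 4) & 0xF   # 3: Gen3 signalling, i.e. 6.0 Gbps
    ipm = (sstatus >> 8) & 0xF   # 1: interface in active state
    speeds = {1: "1.5 Gbps", 2: "3.0 Gbps", 3: "6.0 Gbps"}
    print(det, speeds.get(spd, "unknown"), ipm)   # -> 3 6.0 Gbps 1
    # The "SATA link down (SStatus 0 ...)" ports decode to det=0: no device.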
May 17 00:28:46.595975 kernel: igb 0000:03:00.0 eno1: renamed from eth0 May 17 00:28:46.596059 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 May 17 00:28:46.596129 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed May 17 00:28:46.596193 kernel: hub 1-0:1.0: USB hub found May 17 00:28:46.615052 kernel: hub 1-0:1.0: 16 ports detected May 17 00:28:46.623458 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:28:46.637500 kernel: hub 2-0:1.0: USB hub found May 17 00:28:46.637592 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 17 00:28:46.637671 kernel: hub 2-0:1.0: 10 ports detected May 17 00:28:46.642296 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 May 17 00:28:46.681272 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 00:28:46.712757 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:28:46.712780 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 00:28:46.712880 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:28:46.724965 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 17 00:28:46.725060 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 00:28:46.737688 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:28:46.737771 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 May 17 00:28:46.737844 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:28:46.737916 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks May 17 00:28:46.751469 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes May 17 00:28:46.751556 kernel: sd 1:0:0:0: [sdb] Write Protect is off May 17 00:28:46.752394 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:28:47.124623 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:28:47.124640 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 May 17 00:28:47.124743 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:28:47.124752 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:28:47.124817 kernel: GPT:9289727 != 937703087 May 17 00:28:47.124825 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:28:47.124831 kernel: GPT:9289727 != 937703087 May 17 00:28:47.124840 kernel: GPT: Use GNU Parted to correct GPT errors. 
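[Editor's note: the GPT complaints above are expected on a first boot: the Flatcar image ships its backup GPT header at LBA 9289727, the end of the roughly 4.4 GiB image, while on this 480 GB disk the backup header belongs at the last LBA, 937703087. The arithmetic, as a sketch; sgdisk -e or GNU Parted (as the kernel suggests) would relocate it, and the disk-uuid step a few lines below rewrites the headers anyway:]

    SECTOR = 512
    total_sectors = 937703088      # "sd 0:0:0:0: [sda] 937703088 512-byte logical blocks"
    image_alt_header = 9289727     # where the image's backup GPT header actually sits

    expected_alt_header = total_sectors - 1   # the backup header occupies the last LBA
    print(expected_alt_header)                # 937703087, the value GPT expected
    print((image_alt_header + 1) * SECTOR / 2**30)  # ~4.43 GiB: the original image size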
May 17 00:28:47.124847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:28:47.124854 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes May 17 00:28:47.124915 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:28:47.124975 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:28:47.124983 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd May 17 00:28:47.125087 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk May 17 00:28:47.125150 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) May 17 00:28:47.125221 kernel: hub 1-14:1.0: USB hub found May 17 00:28:47.125338 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged May 17 00:28:47.125407 kernel: hub 1-14:1.0: 4 ports detected May 17 00:28:47.125474 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (560) May 17 00:28:47.125482 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (576) May 17 00:28:47.135794 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. May 17 00:28:47.179608 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:28:47.194574 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. May 17 00:28:47.251196 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. May 17 00:28:47.271641 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. May 17 00:28:47.345382 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 17 00:28:47.345475 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 May 17 00:28:47.345541 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 May 17 00:28:47.345604 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd May 17 00:28:47.294340 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. May 17 00:28:47.405363 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:28:47.405377 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:28:47.351336 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:28:47.425041 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:28:47.425098 disk-uuid[725]: Primary Header is updated. May 17 00:28:47.425098 disk-uuid[725]: Secondary Entries is updated. May 17 00:28:47.425098 disk-uuid[725]: Secondary Header is updated. 
May 17 00:28:47.473332 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:28:47.473344 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:28:47.473351 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:28:47.503272 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:28:47.526318 kernel: usbcore: registered new interface driver usbhid May 17 00:28:47.526336 kernel: usbhid: USB HID core driver May 17 00:28:47.572251 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 May 17 00:28:47.669680 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 May 17 00:28:47.669809 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 May 17 00:28:47.704643 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 May 17 00:28:48.472814 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:28:48.493949 disk-uuid[727]: The operation has completed successfully. May 17 00:28:48.503424 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:28:48.525846 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:28:48.525893 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:28:48.557533 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:28:48.596344 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 00:28:48.596406 sh[745]: Success May 17 00:28:48.626026 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:28:48.650473 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:28:48.658657 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:28:48.710770 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:28:48.710789 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:28:48.732623 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:28:48.752091 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:28:48.770511 kernel: BTRFS info (device dm-0): using free space tree May 17 00:28:48.809248 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:28:48.811614 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:28:48.820807 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:28:48.832571 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:28:48.856797 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 17 00:28:48.962229 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:28:48.962246 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:28:48.962280 kernel: BTRFS info (device sda6): using free space tree May 17 00:28:48.962302 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:28:48.962309 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:28:48.986248 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:28:48.988075 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:28:49.013908 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:28:49.080832 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:28:49.109146 ignition[806]: Ignition 2.19.0 May 17 00:28:49.109151 ignition[806]: Stage: fetch-offline May 17 00:28:49.109444 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:28:49.109172 ignition[806]: no configs at "/usr/lib/ignition/base.d" May 17 00:28:49.111237 unknown[806]: fetched base config from "system" May 17 00:28:49.109177 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:28:49.111241 unknown[806]: fetched user config from "system" May 17 00:28:49.109230 ignition[806]: parsed url from cmdline: "" May 17 00:28:49.115529 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:28:49.109231 ignition[806]: no config URL provided May 17 00:28:49.120701 systemd-networkd[929]: lo: Link UP May 17 00:28:49.109234 ignition[806]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:28:49.120703 systemd-networkd[929]: lo: Gained carrier May 17 00:28:49.109283 ignition[806]: parsing config with SHA512: c29a6a678bf9f2aea86279e54f5a47ab25aa6f5fc47e2fa4ae58d8a745b5da7faa630a54fbce129a0296b17ec5a0d29a9c15e4277c12c600441028deb15cc73c May 17 00:28:49.123116 systemd-networkd[929]: Enumeration completed May 17 00:28:49.111488 ignition[806]: fetch-offline: fetch-offline passed May 17 00:28:49.123913 systemd-networkd[929]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:28:49.111490 ignition[806]: POST message to Packet Timeline May 17 00:28:49.130515 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:28:49.111493 ignition[806]: POST Status error: resource requires networking May 17 00:28:49.152353 systemd-networkd[929]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:28:49.111526 ignition[806]: Ignition finished successfully May 17 00:28:49.159777 systemd[1]: Reached target network.target - Network. May 17 00:28:49.211652 ignition[945]: Ignition 2.19.0 May 17 00:28:49.180347 systemd-networkd[929]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:28:49.211663 ignition[945]: Stage: kargs May 17 00:28:49.183355 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 00:28:49.211934 ignition[945]: no configs at "/usr/lib/ignition/base.d" May 17 00:28:49.196416 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 17 00:28:49.211953 ignition[945]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:28:49.418372 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 17 00:28:49.410115 systemd-networkd[929]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:28:49.213480 ignition[945]: kargs: kargs passed May 17 00:28:49.213487 ignition[945]: POST message to Packet Timeline May 17 00:28:49.213508 ignition[945]: GET https://metadata.packet.net/metadata: attempt #1 May 17 00:28:49.214698 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42050->[::1]:53: read: connection refused May 17 00:28:49.415702 ignition[945]: GET https://metadata.packet.net/metadata: attempt #2 May 17 00:28:49.416253 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45361->[::1]:53: read: connection refused May 17 00:28:49.703377 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 00:28:49.704529 systemd-networkd[929]: eno1: Link UP May 17 00:28:49.704719 systemd-networkd[929]: eno2: Link UP May 17 00:28:49.704897 systemd-networkd[929]: enp1s0f0np0: Link UP May 17 00:28:49.705111 systemd-networkd[929]: enp1s0f0np0: Gained carrier May 17 00:28:49.720510 systemd-networkd[929]: enp1s0f1np1: Link UP May 17 00:28:49.755447 systemd-networkd[929]: enp1s0f0np0: DHCPv4 address 147.75.203.231/31, gateway 147.75.203.230 acquired from 145.40.83.140 May 17 00:28:49.816373 ignition[945]: GET https://metadata.packet.net/metadata: attempt #3 May 17 00:28:49.817568 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35214->[::1]:53: read: connection refused May 17 00:28:50.432819 systemd-networkd[929]: enp1s0f1np1: Gained carrier May 17 00:28:50.617917 ignition[945]: GET https://metadata.packet.net/metadata: attempt #4 May 17 00:28:50.619093 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48362->[::1]:53: read: connection refused May 17 00:28:51.136855 systemd-networkd[929]: enp1s0f0np0: Gained IPv6LL May 17 00:28:52.220397 ignition[945]: GET https://metadata.packet.net/metadata: attempt #5 May 17 00:28:52.221535 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44862->[::1]:53: read: connection refused May 17 00:28:52.480826 systemd-networkd[929]: enp1s0f1np1: Gained IPv6LL May 17 00:28:55.424241 ignition[945]: GET https://metadata.packet.net/metadata: attempt #6 May 17 00:28:56.404887 ignition[945]: GET result: OK May 17 00:28:56.874768 ignition[945]: Ignition finished successfully May 17 00:28:56.879647 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:28:56.912493 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
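[Editor's note: the DHCPv4 lease above, 147.75.203.231/31 with gateway 147.75.203.230, is an RFC 3021 point-to-point /31: both addresses are usable hosts, with no separate network or broadcast address. Python's ipaddress module (3.8+) models this directly:]

    import ipaddress

    net = ipaddress.ip_network("147.75.203.230/31", strict=False)
    print(list(net.hosts()))
    # -> [IPv4Address('147.75.203.230'), IPv4Address('147.75.203.231')]
    # In a /31 the gateway and the leased address are the only two members.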
May 17 00:28:56.919351 ignition[964]: Ignition 2.19.0 May 17 00:28:56.919356 ignition[964]: Stage: disks May 17 00:28:56.919484 ignition[964]: no configs at "/usr/lib/ignition/base.d" May 17 00:28:56.919492 ignition[964]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:28:56.920121 ignition[964]: disks: disks passed May 17 00:28:56.920124 ignition[964]: POST message to Packet Timeline May 17 00:28:56.920134 ignition[964]: GET https://metadata.packet.net/metadata: attempt #1 May 17 00:28:57.873015 ignition[964]: GET result: OK May 17 00:28:58.202480 ignition[964]: Ignition finished successfully May 17 00:28:58.206034 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:28:58.222486 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:28:58.241488 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:28:58.263580 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:28:58.285553 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:28:58.305550 systemd[1]: Reached target basic.target - Basic System. May 17 00:28:58.337526 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:28:58.374178 systemd-fsck[982]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:28:58.384725 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:28:58.406491 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:28:58.525981 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:28:58.542495 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:28:58.535753 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:28:58.562439 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:28:58.594730 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:28:58.697271 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (991) May 17 00:28:58.697285 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:28:58.697293 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:28:58.697301 kernel: BTRFS info (device sda6): using free space tree May 17 00:28:58.697308 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:28:58.697315 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:28:58.657168 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 17 00:28:58.708842 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... May 17 00:28:58.731383 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:28:58.731420 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:28:58.780542 coreos-metadata[998]: May 17 00:28:58.761 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 00:28:58.771530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 17 00:28:58.818435 coreos-metadata[1009]: May 17 00:28:58.762 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 00:28:58.789576 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:28:58.820544 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:28:58.856390 initrd-setup-root[1023]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:28:58.866361 initrd-setup-root[1030]: cut: /sysroot/etc/group: No such file or directory May 17 00:28:58.877365 initrd-setup-root[1037]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:28:58.887369 initrd-setup-root[1044]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:28:58.904542 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:28:58.923459 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:28:58.949754 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:28:58.958382 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:28:58.958790 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:28:58.981495 ignition[1111]: INFO : Ignition 2.19.0 May 17 00:28:58.981495 ignition[1111]: INFO : Stage: mount May 17 00:28:59.005490 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:28:59.005490 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:28:59.005490 ignition[1111]: INFO : mount: mount passed May 17 00:28:59.005490 ignition[1111]: INFO : POST message to Packet Timeline May 17 00:28:59.005490 ignition[1111]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 00:28:58.983200 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:28:59.714845 coreos-metadata[1009]: May 17 00:28:59.714 INFO Fetch successful May 17 00:28:59.794582 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 17 00:28:59.794640 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. May 17 00:28:59.826320 coreos-metadata[998]: May 17 00:28:59.802 INFO Fetch successful May 17 00:28:59.834322 ignition[1111]: INFO : GET result: OK May 17 00:28:59.842505 coreos-metadata[998]: May 17 00:28:59.833 INFO wrote hostname ci-4081.3.3-n-65a4af4639 to /sysroot/etc/hostname May 17 00:28:59.834363 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:29:00.227181 ignition[1111]: INFO : Ignition finished successfully May 17 00:29:00.230205 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:29:00.260542 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:29:00.272502 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:29:00.336226 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1133) May 17 00:29:00.336249 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:29:00.356503 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:29:00.374735 kernel: BTRFS info (device sda6): using free space tree May 17 00:29:00.414652 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:29:00.414675 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:29:00.428847 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 17 00:29:00.452870 ignition[1150]: INFO : Ignition 2.19.0 May 17 00:29:00.452870 ignition[1150]: INFO : Stage: files May 17 00:29:00.466485 ignition[1150]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:29:00.466485 ignition[1150]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:29:00.466485 ignition[1150]: DEBUG : files: compiled without relabeling support, skipping May 17 00:29:00.466485 ignition[1150]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:29:00.466485 ignition[1150]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:29:00.466485 ignition[1150]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:29:00.466485 ignition[1150]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:29:00.466485 ignition[1150]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:29:00.466485 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:29:00.466485 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:29:00.457349 unknown[1150]: wrote ssh authorized keys file for user: core May 17 00:29:00.599536 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:29:00.691211 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:29:00.691211 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:29:01.495428 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 00:29:01.699806 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:01.699806 ignition[1150]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 00:29:01.730460 ignition[1150]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:29:01.730460 ignition[1150]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:29:01.730460 ignition[1150]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 00:29:01.730460 ignition[1150]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 17 00:29:01.730460 ignition[1150]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:29:01.730460 ignition[1150]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:29:01.730460 ignition[1150]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:29:01.730460 ignition[1150]: INFO : files: files passed May 17 00:29:01.730460 ignition[1150]: INFO : POST message to Packet Timeline May 17 00:29:01.730460 ignition[1150]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 00:29:02.707774 ignition[1150]: INFO : GET result: OK May 17 00:29:03.112336 ignition[1150]: INFO : Ignition finished successfully May 17 00:29:03.116295 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:29:03.144493 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:29:03.155835 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:29:03.176533 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:29:03.176600 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:29:03.224954 initrd-setup-root-after-ignition[1189]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:29:03.224954 initrd-setup-root-after-ignition[1189]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:29:03.263522 initrd-setup-root-after-ignition[1193]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:29:03.229452 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:29:03.240620 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:29:03.287545 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
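[Editor's note: the files stage above executes a user-provided Ignition config that is not itself visible in the journal. For orientation only, a hypothetical Ignition v3 config producing this kind of log (an SSH key for core, a file under /etc/flatcar, an enabled unit) would be shaped roughly like this Python dict; all values below are placeholders, not the config actually used here:]

    import json

    # Hypothetical sketch of an Ignition v3 config matching the logged operations.
    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {"users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]},
        ]},
        "storage": {"files": [
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,GROUP%3Dstable"}},
        ]},
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=...\n"},
        ]},
    }
    print(json.dumps(config, indent=2))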
May 17 00:29:03.334820 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:29:03.334945 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:29:03.344269 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:29:03.375547 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:29:03.395731 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:29:03.410655 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:29:03.481236 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:29:03.506636 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:29:03.535227 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:29:03.546768 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:29:03.567923 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:29:03.585854 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:29:03.586273 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:29:03.625721 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:29:03.635865 systemd[1]: Stopped target basic.target - Basic System. May 17 00:29:03.654857 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:29:03.673862 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:29:03.694966 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:29:03.716862 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:29:03.736836 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:29:03.757885 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:29:03.779871 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:29:03.799952 systemd[1]: Stopped target swap.target - Swaps. May 17 00:29:03.817717 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:29:03.818114 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:29:03.853444 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:29:03.863531 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:29:03.884633 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:29:03.884867 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:29:03.906534 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:29:03.906703 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:29:03.944453 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:29:03.944659 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:29:03.965683 systemd[1]: Stopped target paths.target - Path Units. May 17 00:29:03.984644 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:29:03.988497 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 17 00:29:04.005698 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:29:04.023694 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:29:04.043593 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:29:04.043721 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:29:04.066615 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:29:04.066739 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:29:04.085636 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:29:04.085801 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:29:04.204501 ignition[1213]: INFO : Ignition 2.19.0 May 17 00:29:04.204501 ignition[1213]: INFO : Stage: umount May 17 00:29:04.204501 ignition[1213]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:29:04.204501 ignition[1213]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:29:04.204501 ignition[1213]: INFO : umount: umount passed May 17 00:29:04.204501 ignition[1213]: INFO : POST message to Packet Timeline May 17 00:29:04.204501 ignition[1213]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 00:29:04.105636 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:29:04.105795 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:29:04.124634 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:29:04.124797 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:29:04.155511 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:29:04.172553 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:29:04.173011 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:29:04.205563 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:29:04.219503 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:29:04.219573 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:29:04.245582 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:29:04.245688 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:29:04.286804 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:29:04.288337 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:29:04.288556 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:29:04.305189 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:29:04.305463 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:29:05.724676 ignition[1213]: INFO : GET result: OK May 17 00:29:06.124573 ignition[1213]: INFO : Ignition finished successfully May 17 00:29:06.127432 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:29:06.127724 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:29:06.144522 systemd[1]: Stopped target network.target - Network. May 17 00:29:06.160471 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:29:06.160644 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:29:06.178639 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:29:06.178800 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
May 17 00:29:06.197662 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:29:06.197815 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:29:06.216625 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:29:06.216793 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:29:06.236643 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:29:06.236812 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:29:06.255008 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:29:06.266411 systemd-networkd[929]: enp1s0f0np0: DHCPv6 lease lost May 17 00:29:06.272770 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:29:06.275488 systemd-networkd[929]: enp1s0f1np1: DHCPv6 lease lost May 17 00:29:06.291221 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:29:06.291546 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:29:06.310477 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:29:06.310846 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:29:06.330611 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:29:06.330759 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:29:06.366436 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:29:06.387393 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:29:06.387435 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:29:06.406534 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:29:06.406622 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:29:06.427623 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:29:06.427784 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:29:06.446635 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:29:06.446799 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:29:06.466981 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:29:06.490505 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:29:06.490888 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:29:06.520301 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:29:06.520453 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:29:06.527743 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:29:06.527840 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:29:06.555513 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:29:06.555648 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:29:06.585836 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:29:06.586005 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:29:06.625395 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:29:06.625559 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 17 00:29:06.673302 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:29:06.689377 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:29:06.689412 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:29:06.717415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:29:06.717470 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:29:06.740507 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:29:06.740763 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:29:06.968489 systemd-journald[266]: Received SIGTERM from PID 1 (systemd). May 17 00:29:06.810766 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:29:06.811036 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:29:06.829471 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:29:06.862672 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:29:06.909679 systemd[1]: Switching root. May 17 00:29:07.011379 systemd-journald[266]: Journal stopped
May 17 00:28:44.026890 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) May 17 00:28:44.026895 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) May 17 00:28:44.026900 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) May 17 00:28:44.026906 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 00:28:44.026911 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) May 17 00:28:44.026916 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) May 17 00:28:44.026921 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 00:28:44.026926 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 00:28:44.026931 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) May 17 00:28:44.026936 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) May 17 00:28:44.026942 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 00:28:44.026947 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) May 17 00:28:44.026953 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) May 17 00:28:44.026958 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013) May 17 00:28:44.026963 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) May 17 00:28:44.026968 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) May 17 00:28:44.026973 kernel: ACPI: SSDT 0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) May 17 00:28:44.026978 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013) May 17 00:28:44.026983 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) May 17 00:28:44.026988 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) May 17 00:28:44.026994 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) May 17 00:28:44.026999 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI.
00000000) May 17 00:28:44.027004 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) May 17 00:28:44.027010 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783] May 17 00:28:44.027015 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b] May 17 00:28:44.027020 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf] May 17 00:28:44.027025 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3] May 17 00:28:44.027030 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb] May 17 00:28:44.027035 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b] May 17 00:28:44.027041 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db] May 17 00:28:44.027046 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20] May 17 00:28:44.027051 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543] May 17 00:28:44.027056 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d] May 17 00:28:44.027061 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a] May 17 00:28:44.027066 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77] May 17 00:28:44.027071 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25] May 17 00:28:44.027076 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b] May 17 00:28:44.027081 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361] May 17 00:28:44.027087 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb] May 17 00:28:44.027092 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd] May 17 00:28:44.027097 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1] May 17 00:28:44.027102 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb] May 17 00:28:44.027107 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598100-0x8c598153] May 17 00:28:44.027112 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe] May 17 00:28:44.027117 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f] May 17 00:28:44.027122 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73] May 17 00:28:44.027127 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab] May 17 00:28:44.027133 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e] May 17 00:28:44.027138 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67] May 17 00:28:44.027144 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97] May 17 00:28:44.027149 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7] May 17 00:28:44.027154 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7] May 17 00:28:44.027159 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59aff8-0x8c59b273] May 17 00:28:44.027164 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9] May 17 00:28:44.027169 kernel: No NUMA configuration found May 17 00:28:44.027174 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] May 17 00:28:44.027179 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] May 17 00:28:44.027185 kernel: Zone ranges: May 17 00:28:44.027190 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:28:44.027195 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 17 
00:28:44.027200 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] May 17 00:28:44.027206 kernel: Movable zone start for each node May 17 00:28:44.027211 kernel: Early memory node ranges May 17 00:28:44.027216 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] May 17 00:28:44.027221 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] May 17 00:28:44.027226 kernel: node 0: [mem 0x0000000040400000-0x0000000081a73fff] May 17 00:28:44.027232 kernel: node 0: [mem 0x0000000081a76000-0x000000008afcdfff] May 17 00:28:44.027237 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff] May 17 00:28:44.027244 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] May 17 00:28:44.027250 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] May 17 00:28:44.027259 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] May 17 00:28:44.027265 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:28:44.027270 kernel: On node 0, zone DMA: 103 pages in unavailable ranges May 17 00:28:44.027276 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges May 17 00:28:44.027282 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges May 17 00:28:44.027288 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges May 17 00:28:44.027293 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges May 17 00:28:44.027299 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges May 17 00:28:44.027304 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges May 17 00:28:44.027310 kernel: ACPI: PM-Timer IO Port: 0x1808 May 17 00:28:44.027315 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 17 00:28:44.027321 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 17 00:28:44.027326 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 17 00:28:44.027332 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 17 00:28:44.027338 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 17 00:28:44.027343 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 17 00:28:44.027348 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 17 00:28:44.027354 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 17 00:28:44.027359 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 17 00:28:44.027365 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 17 00:28:44.027370 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 17 00:28:44.027375 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 17 00:28:44.027382 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 17 00:28:44.027387 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 17 00:28:44.027392 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 17 00:28:44.027398 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 17 00:28:44.027403 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 May 17 00:28:44.027409 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 00:28:44.027414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:28:44.027420 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:28:44.027425 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:28:44.027432 kernel: TSC deadline timer available May 17 00:28:44.027437 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs May 17 00:28:44.027443 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices May 17 00:28:44.027448 kernel: Booting paravirtualized kernel on bare hardware May 17 00:28:44.027454 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:28:44.027459 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 May 17 00:28:44.027465 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 May 17 00:28:44.027470 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 May 17 00:28:44.027476 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 May 17 00:28:44.027483 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:28:44.027488 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:28:44.027494 kernel: random: crng init done May 17 00:28:44.027499 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) May 17 00:28:44.027504 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) May 17 00:28:44.027510 kernel: Fallback order for Node 0: 0 May 17 00:28:44.027515 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416 May 17 00:28:44.027521 kernel: Policy zone: Normal May 17 00:28:44.027527 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:28:44.027533 kernel: software IO TLB: area num 16. May 17 00:28:44.027538 kernel: Memory: 32720308K/33452984K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 732416K reserved, 0K cma-reserved) May 17 00:28:44.027544 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 May 17 00:28:44.027549 kernel: ftrace: allocating 37948 entries in 149 pages May 17 00:28:44.027555 kernel: ftrace: allocated 149 pages with 4 groups May 17 00:28:44.027560 kernel: Dynamic Preempt: voluntary May 17 00:28:44.027566 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:28:44.027572 kernel: rcu: RCU event tracing is enabled. May 17 00:28:44.027578 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. May 17 00:28:44.027584 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:28:44.027589 kernel: Rude variant of Tasks RCU enabled. May 17 00:28:44.027595 kernel: Tracing variant of Tasks RCU enabled. May 17 00:28:44.027600 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:28:44.027606 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 May 17 00:28:44.027611 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 May 17 00:28:44.027616 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:28:44.027622 kernel: Console: colour dummy device 80x25 May 17 00:28:44.027627 kernel: printk: console [tty0] enabled May 17 00:28:44.027634 kernel: printk: console [ttyS1] enabled May 17 00:28:44.027639 kernel: ACPI: Core revision 20230628 May 17 00:28:44.027645 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
May 17 00:28:44.027650 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:28:44.027656 kernel: DMAR: Host address width 39 May 17 00:28:44.027661 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 May 17 00:28:44.027667 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da May 17 00:28:44.027672 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff May 17 00:28:44.027678 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 May 17 00:28:44.027684 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 May 17 00:28:44.027690 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. May 17 00:28:44.027695 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode May 17 00:28:44.027701 kernel: x2apic enabled May 17 00:28:44.027706 kernel: APIC: Switched APIC routing to: cluster x2apic May 17 00:28:44.027712 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns May 17 00:28:44.027717 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) May 17 00:28:44.027723 kernel: CPU0: Thermal monitoring enabled (TM1) May 17 00:28:44.027728 kernel: process: using mwait in idle threads May 17 00:28:44.027734 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 17 00:28:44.027739 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 17 00:28:44.027745 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:28:44.027750 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 17 00:28:44.027756 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 17 00:28:44.027761 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 17 00:28:44.027766 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 17 00:28:44.027772 kernel: RETBleed: Mitigation: Enhanced IBRS May 17 00:28:44.027777 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:28:44.027782 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 17 00:28:44.027788 kernel: TAA: Mitigation: TSX disabled May 17 00:28:44.027794 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers May 17 00:28:44.027799 kernel: SRBDS: Mitigation: Microcode May 17 00:28:44.027805 kernel: GDS: Mitigation: Microcode May 17 00:28:44.027810 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:28:44.027816 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:28:44.027821 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:28:44.027827 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 17 00:28:44.027832 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 17 00:28:44.027837 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:28:44.027843 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 17 00:28:44.027848 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 17 00:28:44.027854 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
May 17 00:28:44.027860 kernel: Freeing SMP alternatives memory: 32K May 17 00:28:44.027865 kernel: pid_max: default: 32768 minimum: 301 May 17 00:28:44.027871 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:28:44.027876 kernel: landlock: Up and running. May 17 00:28:44.027881 kernel: SELinux: Initializing. May 17 00:28:44.027887 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:28:44.027892 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:28:44.027898 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 17 00:28:44.027903 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. May 17 00:28:44.027909 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. May 17 00:28:44.027915 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. May 17 00:28:44.027921 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. May 17 00:28:44.027926 kernel: ... version: 4 May 17 00:28:44.027932 kernel: ... bit width: 48 May 17 00:28:44.027937 kernel: ... generic registers: 4 May 17 00:28:44.027942 kernel: ... value mask: 0000ffffffffffff May 17 00:28:44.027948 kernel: ... max period: 00007fffffffffff May 17 00:28:44.027953 kernel: ... fixed-purpose events: 3 May 17 00:28:44.027959 kernel: ... event mask: 000000070000000f May 17 00:28:44.027965 kernel: signal: max sigframe size: 2032 May 17 00:28:44.027971 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 May 17 00:28:44.027976 kernel: rcu: Hierarchical SRCU implementation. May 17 00:28:44.027982 kernel: rcu: Max phase no-delay instances is 400. May 17 00:28:44.027987 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. May 17 00:28:44.027993 kernel: smp: Bringing up secondary CPUs ... May 17 00:28:44.027998 kernel: smpboot: x86: Booting SMP configuration: May 17 00:28:44.028004 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 May 17 00:28:44.028009 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
May 17 00:28:44.028016 kernel: smp: Brought up 1 node, 16 CPUs May 17 00:28:44.028021 kernel: smpboot: Max logical packages: 1 May 17 00:28:44.028027 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) May 17 00:28:44.028032 kernel: devtmpfs: initialized May 17 00:28:44.028038 kernel: x86/mm: Memory block size: 128MB May 17 00:28:44.028043 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a74000-0x81a74fff] (4096 bytes) May 17 00:28:44.028049 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes) May 17 00:28:44.028054 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:28:44.028061 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) May 17 00:28:44.028066 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:28:44.028071 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:28:44.028077 kernel: audit: initializing netlink subsys (disabled) May 17 00:28:44.028082 kernel: audit: type=2000 audit(1747441718.038:1): state=initialized audit_enabled=0 res=1 May 17 00:28:44.028088 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:28:44.028093 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:28:44.028098 kernel: cpuidle: using governor menu May 17 00:28:44.028104 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:28:44.028110 kernel: dca service started, version 1.12.1 May 17 00:28:44.028116 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) May 17 00:28:44.028121 kernel: PCI: Using configuration type 1 for base access May 17 00:28:44.028127 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' May 17 00:28:44.028132 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:28:44.028137 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:28:44.028143 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:28:44.028162 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:28:44.028167 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:28:44.028173 kernel: ACPI: Added _OSI(Module Device) May 17 00:28:44.028179 kernel: ACPI: Added _OSI(Processor Device) May 17 00:28:44.028184 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:28:44.028189 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:28:44.028195 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded May 17 00:28:44.028200 kernel: ACPI: Dynamic OEM Table Load: May 17 00:28:44.028205 kernel: ACPI: SSDT 0xFFFF988940E3A800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) May 17 00:28:44.028211 kernel: ACPI: Dynamic OEM Table Load: May 17 00:28:44.028216 kernel: ACPI: SSDT 0xFFFF988941E0A800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) May 17 00:28:44.028222 kernel: ACPI: Dynamic OEM Table Load: May 17 00:28:44.028228 kernel: ACPI: SSDT 0xFFFF988940DE4000 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) May 17 00:28:44.028233 kernel: ACPI: Dynamic OEM Table Load: May 17 00:28:44.028238 kernel: ACPI: SSDT 0xFFFF988941E08800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) May 17 00:28:44.028245 kernel: ACPI: Dynamic OEM Table Load: May 17 00:28:44.028250 kernel: ACPI: SSDT 0xFFFF988940E51000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) May 17 00:28:44.028256 kernel: ACPI: Dynamic OEM Table Load: May 17 00:28:44.028261 kernel: ACPI: SSDT 0xFFFF988940FA4800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) May 17 00:28:44.028266 kernel: ACPI: _OSC evaluated successfully for all CPUs May 17 00:28:44.028272 kernel: ACPI: Interpreter enabled May 17 00:28:44.028278 kernel: ACPI: PM: (supports S0 S5) May 17 00:28:44.028283 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:28:44.028289 kernel: HEST: Enabling Firmware First mode for corrected errors. May 17 00:28:44.028294 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. May 17 00:28:44.028299 kernel: HEST: Table parsing has been initialized. May 17 00:28:44.028305 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
May 17 00:28:44.028310 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:28:44.028315 kernel: PCI: Ignoring E820 reservations for host bridge windows May 17 00:28:44.028321 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F May 17 00:28:44.028327 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource May 17 00:28:44.028333 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource May 17 00:28:44.028338 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource May 17 00:28:44.028344 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource May 17 00:28:44.028349 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource May 17 00:28:44.028354 kernel: ACPI: \_TZ_.FN00: New power resource May 17 00:28:44.028360 kernel: ACPI: \_TZ_.FN01: New power resource May 17 00:28:44.028365 kernel: ACPI: \_TZ_.FN02: New power resource May 17 00:28:44.028370 kernel: ACPI: \_TZ_.FN03: New power resource May 17 00:28:44.028376 kernel: ACPI: \_TZ_.FN04: New power resource May 17 00:28:44.028382 kernel: ACPI: \PIN_: New power resource May 17 00:28:44.028387 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) May 17 00:28:44.028460 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:28:44.028514 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] May 17 00:28:44.028561 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] May 17 00:28:44.028569 kernel: PCI host bridge to bus 0000:00 May 17 00:28:44.028622 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:28:44.028665 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 00:28:44.028707 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:28:44.028749 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] May 17 00:28:44.028791 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] May 17 00:28:44.028831 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] May 17 00:28:44.028888 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 May 17 00:28:44.028947 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 May 17 00:28:44.028997 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold May 17 00:28:44.029049 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 May 17 00:28:44.029098 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] May 17 00:28:44.029149 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 May 17 00:28:44.029197 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] May 17 00:28:44.029255 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 May 17 00:28:44.029304 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] May 17 00:28:44.029352 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold May 17 00:28:44.029402 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 May 17 00:28:44.029450 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] May 17 00:28:44.029496 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] May 17 00:28:44.029551 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 May 17 00:28:44.029598 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 00:28:44.029652 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 May 17 00:28:44.029699 kernel: pci 
0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 00:28:44.029751 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 May 17 00:28:44.029798 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] May 17 00:28:44.029847 kernel: pci 0000:00:16.0: PME# supported from D3hot May 17 00:28:44.029897 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 May 17 00:28:44.029945 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] May 17 00:28:44.030001 kernel: pci 0000:00:16.1: PME# supported from D3hot May 17 00:28:44.030052 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 May 17 00:28:44.030100 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] May 17 00:28:44.030146 kernel: pci 0000:00:16.4: PME# supported from D3hot May 17 00:28:44.030200 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 May 17 00:28:44.030251 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] May 17 00:28:44.030338 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] May 17 00:28:44.030387 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] May 17 00:28:44.030435 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] May 17 00:28:44.030482 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] May 17 00:28:44.030529 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] May 17 00:28:44.030578 kernel: pci 0000:00:17.0: PME# supported from D3hot May 17 00:28:44.030631 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 May 17 00:28:44.030682 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold May 17 00:28:44.030738 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 May 17 00:28:44.030790 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold May 17 00:28:44.030841 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 May 17 00:28:44.030890 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold May 17 00:28:44.030941 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 May 17 00:28:44.030991 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold May 17 00:28:44.031042 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 May 17 00:28:44.031093 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold May 17 00:28:44.031145 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 May 17 00:28:44.031193 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 00:28:44.031310 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 May 17 00:28:44.031365 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 May 17 00:28:44.031413 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] May 17 00:28:44.031463 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] May 17 00:28:44.031518 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 May 17 00:28:44.031565 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] May 17 00:28:44.031620 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 May 17 00:28:44.031669 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] May 17 00:28:44.031718 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] May 17 00:28:44.031769 kernel: pci 0000:01:00.0: PME# supported from D3cold May 17 00:28:44.031818 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] May 17 00:28:44.031867 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] 
(contains BAR0 for 8 VFs) May 17 00:28:44.031922 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 May 17 00:28:44.031972 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] May 17 00:28:44.032020 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] May 17 00:28:44.032069 kernel: pci 0000:01:00.1: PME# supported from D3cold May 17 00:28:44.032119 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] May 17 00:28:44.032169 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) May 17 00:28:44.032218 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 00:28:44.032269 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] May 17 00:28:44.032347 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] May 17 00:28:44.032395 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] May 17 00:28:44.032449 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect May 17 00:28:44.032498 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 May 17 00:28:44.032550 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] May 17 00:28:44.032598 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] May 17 00:28:44.032646 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] May 17 00:28:44.032695 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 17 00:28:44.032744 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] May 17 00:28:44.032792 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] May 17 00:28:44.032839 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] May 17 00:28:44.032896 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect May 17 00:28:44.032945 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 May 17 00:28:44.032995 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] May 17 00:28:44.033044 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] May 17 00:28:44.033093 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] May 17 00:28:44.033142 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold May 17 00:28:44.033191 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] May 17 00:28:44.033239 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] May 17 00:28:44.033336 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] May 17 00:28:44.033385 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] May 17 00:28:44.033438 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 May 17 00:28:44.033488 kernel: pci 0000:06:00.0: enabling Extended Tags May 17 00:28:44.033537 kernel: pci 0000:06:00.0: supports D1 D2 May 17 00:28:44.033587 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:28:44.033634 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] May 17 00:28:44.033685 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 17 00:28:44.033732 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 17 00:28:44.033788 kernel: pci_bus 0000:07: extended config space not accessible May 17 00:28:44.033844 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 May 17 00:28:44.033897 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] May 17 00:28:44.033948 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] May 17 00:28:44.033998 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] May 17 00:28:44.034051 kernel: pci 0000:07:00.0: 
Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:28:44.034102 kernel: pci 0000:07:00.0: supports D1 D2 May 17 00:28:44.034151 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:28:44.034201 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] May 17 00:28:44.034252 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 17 00:28:44.034302 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] May 17 00:28:44.034310 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 May 17 00:28:44.034316 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 May 17 00:28:44.034324 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 May 17 00:28:44.034330 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 May 17 00:28:44.034335 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 May 17 00:28:44.034341 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 May 17 00:28:44.034347 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 May 17 00:28:44.034352 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 May 17 00:28:44.034358 kernel: iommu: Default domain type: Translated May 17 00:28:44.034363 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:28:44.034369 kernel: PCI: Using ACPI for IRQ routing May 17 00:28:44.034376 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:28:44.034381 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] May 17 00:28:44.034387 kernel: e820: reserve RAM buffer [mem 0x81a74000-0x83ffffff] May 17 00:28:44.034392 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] May 17 00:28:44.034398 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] May 17 00:28:44.034403 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] May 17 00:28:44.034409 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] May 17 00:28:44.034458 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device May 17 00:28:44.034509 kernel: pci 0000:07:00.0: vgaarb: bridge control possible May 17 00:28:44.034563 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:28:44.034572 kernel: vgaarb: loaded May 17 00:28:44.034578 kernel: clocksource: Switched to clocksource tsc-early May 17 00:28:44.034583 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:28:44.034589 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:28:44.034595 kernel: pnp: PnP ACPI init May 17 00:28:44.034645 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved May 17 00:28:44.034694 kernel: pnp 00:02: [dma 0 disabled] May 17 00:28:44.034743 kernel: pnp 00:03: [dma 0 disabled] May 17 00:28:44.034794 kernel: system 00:04: [io 0x0680-0x069f] has been reserved May 17 00:28:44.034838 kernel: system 00:04: [io 0x164e-0x164f] has been reserved May 17 00:28:44.034885 kernel: system 00:05: [io 0x1854-0x1857] has been reserved May 17 00:28:44.034930 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved May 17 00:28:44.034975 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved May 17 00:28:44.035021 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved May 17 00:28:44.035065 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved May 17 00:28:44.035110 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved May 17 00:28:44.035155 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved May 17 00:28:44.035198 
kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved May 17 00:28:44.035245 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved May 17 00:28:44.035293 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved May 17 00:28:44.035339 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved May 17 00:28:44.035384 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved May 17 00:28:44.035427 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved May 17 00:28:44.035470 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved May 17 00:28:44.035513 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved May 17 00:28:44.035556 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved May 17 00:28:44.035603 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved May 17 00:28:44.035613 kernel: pnp: PnP ACPI: found 10 devices May 17 00:28:44.035619 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:28:44.035626 kernel: NET: Registered PF_INET protocol family May 17 00:28:44.035632 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:28:44.035637 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) May 17 00:28:44.035643 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:28:44.035649 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:28:44.035655 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:28:44.035661 kernel: TCP: Hash tables configured (established 262144 bind 65536) May 17 00:28:44.035667 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) May 17 00:28:44.035673 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) May 17 00:28:44.035679 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:28:44.035684 kernel: NET: Registered PF_XDP protocol family May 17 00:28:44.035732 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] May 17 00:28:44.035780 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] May 17 00:28:44.035829 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] May 17 00:28:44.035880 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] May 17 00:28:44.035931 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 17 00:28:44.035981 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] May 17 00:28:44.036028 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 17 00:28:44.036077 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 00:28:44.036124 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] May 17 00:28:44.036172 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] May 17 00:28:44.036219 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] May 17 00:28:44.036272 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] May 17 00:28:44.036319 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] May 17 00:28:44.036366 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] May 17 00:28:44.036413 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] May 17 00:28:44.036461 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] May 17 
00:28:44.036510 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] May 17 00:28:44.036558 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] May 17 00:28:44.036607 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] May 17 00:28:44.036655 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 17 00:28:44.036705 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] May 17 00:28:44.036751 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] May 17 00:28:44.036799 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 17 00:28:44.036846 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 17 00:28:44.036891 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc May 17 00:28:44.036936 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:28:44.036978 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:28:44.037020 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:28:44.037062 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] May 17 00:28:44.037103 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] May 17 00:28:44.037152 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] May 17 00:28:44.037197 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] May 17 00:28:44.037257 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] May 17 00:28:44.037301 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] May 17 00:28:44.037351 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] May 17 00:28:44.037395 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] May 17 00:28:44.037444 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] May 17 00:28:44.037489 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] May 17 00:28:44.037537 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] May 17 00:28:44.037583 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] May 17 00:28:44.037591 kernel: PCI: CLS 64 bytes, default 64 May 17 00:28:44.037597 kernel: DMAR: No ATSR found May 17 00:28:44.037603 kernel: DMAR: No SATC found May 17 00:28:44.037609 kernel: DMAR: dmar0: Using Queued invalidation May 17 00:28:44.037657 kernel: pci 0000:00:00.0: Adding to iommu group 0 May 17 00:28:44.037706 kernel: pci 0000:00:01.0: Adding to iommu group 1 May 17 00:28:44.037753 kernel: pci 0000:00:08.0: Adding to iommu group 2 May 17 00:28:44.037803 kernel: pci 0000:00:12.0: Adding to iommu group 3 May 17 00:28:44.037849 kernel: pci 0000:00:14.0: Adding to iommu group 4 May 17 00:28:44.037897 kernel: pci 0000:00:14.2: Adding to iommu group 4 May 17 00:28:44.037943 kernel: pci 0000:00:15.0: Adding to iommu group 5 May 17 00:28:44.037990 kernel: pci 0000:00:15.1: Adding to iommu group 5 May 17 00:28:44.038037 kernel: pci 0000:00:16.0: Adding to iommu group 6 May 17 00:28:44.038084 kernel: pci 0000:00:16.1: Adding to iommu group 6 May 17 00:28:44.038131 kernel: pci 0000:00:16.4: Adding to iommu group 6 May 17 00:28:44.038181 kernel: pci 0000:00:17.0: Adding to iommu group 7 May 17 00:28:44.038229 kernel: pci 0000:00:1b.0: Adding to iommu group 8 May 17 00:28:44.038279 kernel: pci 0000:00:1b.4: Adding to iommu group 9 May 17 00:28:44.038327 kernel: pci 0000:00:1b.5: Adding to iommu group 10 May 17 00:28:44.038374 kernel: pci 0000:00:1c.0: Adding to iommu group 11 May 17 00:28:44.038422 kernel: pci 0000:00:1c.3: Adding to iommu group 12 May 17 
00:28:44.038468 kernel: pci 0000:00:1e.0: Adding to iommu group 13 May 17 00:28:44.038516 kernel: pci 0000:00:1f.0: Adding to iommu group 14 May 17 00:28:44.038565 kernel: pci 0000:00:1f.4: Adding to iommu group 14 May 17 00:28:44.038613 kernel: pci 0000:00:1f.5: Adding to iommu group 14 May 17 00:28:44.038661 kernel: pci 0000:01:00.0: Adding to iommu group 1 May 17 00:28:44.038711 kernel: pci 0000:01:00.1: Adding to iommu group 1 May 17 00:28:44.038759 kernel: pci 0000:03:00.0: Adding to iommu group 15 May 17 00:28:44.038808 kernel: pci 0000:04:00.0: Adding to iommu group 16 May 17 00:28:44.038857 kernel: pci 0000:06:00.0: Adding to iommu group 17 May 17 00:28:44.038906 kernel: pci 0000:07:00.0: Adding to iommu group 17 May 17 00:28:44.038916 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O May 17 00:28:44.038923 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 17 00:28:44.038928 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) May 17 00:28:44.038934 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer May 17 00:28:44.038940 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules May 17 00:28:44.038946 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules May 17 00:28:44.038951 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules May 17 00:28:44.039003 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) May 17 00:28:44.039014 kernel: Initialise system trusted keyrings May 17 00:28:44.039020 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 May 17 00:28:44.039025 kernel: Key type asymmetric registered May 17 00:28:44.039031 kernel: Asymmetric key parser 'x509' registered May 17 00:28:44.039036 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 17 00:28:44.039042 kernel: io scheduler mq-deadline registered May 17 00:28:44.039048 kernel: io scheduler kyber registered May 17 00:28:44.039054 kernel: io scheduler bfq registered May 17 00:28:44.039100 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 May 17 00:28:44.039150 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 May 17 00:28:44.039197 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 May 17 00:28:44.039248 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 May 17 00:28:44.039334 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 May 17 00:28:44.039382 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 May 17 00:28:44.039433 kernel: thermal LNXTHERM:00: registered as thermal_zone0 May 17 00:28:44.039442 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) May 17 00:28:44.039448 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. May 17 00:28:44.039456 kernel: pstore: Using crash dump compression: deflate May 17 00:28:44.039462 kernel: pstore: Registered erst as persistent store backend May 17 00:28:44.039468 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:28:44.039473 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:28:44.039479 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:28:44.039485 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A May 17 00:28:44.039490 kernel: hpet_acpi_add: no address or irqs in _CRS May 17 00:28:44.039542 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) May 17 00:28:44.039552 kernel: i8042: PNP: No PS/2 controller found. 
May 17 00:28:44.039596 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 May 17 00:28:44.039639 kernel: rtc_cmos rtc_cmos: registered as rtc0 May 17 00:28:44.039684 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-05-17T00:28:42 UTC (1747441722) May 17 00:28:44.039728 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram May 17 00:28:44.039736 kernel: intel_pstate: Intel P-state driver initializing May 17 00:28:44.039742 kernel: intel_pstate: Disabling energy efficiency optimization May 17 00:28:44.039748 kernel: intel_pstate: HWP enabled May 17 00:28:44.039755 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 May 17 00:28:44.039761 kernel: vesafb: scrolling: redraw May 17 00:28:44.039766 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 May 17 00:28:44.039772 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000a96ba51f, using 768k, total 768k May 17 00:28:44.039778 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:28:44.039784 kernel: fb0: VESA VGA frame buffer device May 17 00:28:44.039789 kernel: NET: Registered PF_INET6 protocol family May 17 00:28:44.039795 kernel: Segment Routing with IPv6 May 17 00:28:44.039801 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:28:44.039808 kernel: NET: Registered PF_PACKET protocol family May 17 00:28:44.039814 kernel: Key type dns_resolver registered May 17 00:28:44.039819 kernel: microcode: Current revision: 0x00000102 May 17 00:28:44.039825 kernel: microcode: Microcode Update Driver: v2.2. May 17 00:28:44.039830 kernel: IPI shorthand broadcast: enabled May 17 00:28:44.039836 kernel: sched_clock: Marking stable (2482072145, 1378392188)->(4396076078, -535611745) May 17 00:28:44.039842 kernel: registered taskstats version 1 May 17 00:28:44.039847 kernel: Loading compiled-in X.509 certificates May 17 00:28:44.039853 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 00:28:44.039860 kernel: Key type .fscrypt registered May 17 00:28:44.039865 kernel: Key type fscrypt-provisioning registered May 17 00:28:44.039871 kernel: ima: Allocated hash algorithm: sha1 May 17 00:28:44.039877 kernel: ima: No architecture policies found May 17 00:28:44.039882 kernel: clk: Disabling unused clocks May 17 00:28:44.039888 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 00:28:44.039894 kernel: Write protecting the kernel read-only data: 36864k May 17 00:28:44.039899 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 00:28:44.039905 kernel: Run /init as init process May 17 00:28:44.039912 kernel: with arguments: May 17 00:28:44.039918 kernel: /init May 17 00:28:44.039923 kernel: with environment: May 17 00:28:44.039929 kernel: HOME=/ May 17 00:28:44.039934 kernel: TERM=linux May 17 00:28:44.039940 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:28:44.039947 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:28:44.039954 systemd[1]: Detected architecture x86-64. May 17 00:28:44.039961 systemd[1]: Running in initrd. May 17 00:28:44.039967 systemd[1]: No hostname configured, using default hostname. May 17 00:28:44.039973 systemd[1]: Hostname set to <localhost>.
May 17 00:28:44.039978 systemd[1]: Initializing machine ID from random generator. May 17 00:28:44.039984 systemd[1]: Queued start job for default target initrd.target. May 17 00:28:44.039990 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:28:44.039996 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:28:44.040002 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:28:44.040009 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:28:44.040015 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:28:44.040021 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:28:44.040028 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:28:44.040034 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz May 17 00:28:44.040040 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns May 17 00:28:44.040046 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:28:44.040052 kernel: clocksource: Switched to clocksource tsc May 17 00:28:44.040058 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:28:44.040064 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:28:44.040070 systemd[1]: Reached target paths.target - Path Units. May 17 00:28:44.040076 systemd[1]: Reached target slices.target - Slice Units. May 17 00:28:44.040083 systemd[1]: Reached target swap.target - Swaps. May 17 00:28:44.040089 systemd[1]: Reached target timers.target - Timer Units. May 17 00:28:44.040094 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:28:44.040102 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:28:44.040108 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:28:44.040114 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:28:44.040120 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:28:44.040126 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:28:44.040132 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:28:44.040137 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:28:44.040143 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:28:44.040149 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:28:44.040156 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:28:44.040162 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:28:44.040168 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:28:44.040184 systemd-journald[266]: Collecting audit messages is disabled. May 17 00:28:44.040199 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
May 17 00:28:44.040206 systemd-journald[266]: Journal started May 17 00:28:44.040219 systemd-journald[266]: Runtime Journal (/run/log/journal/d6c2772e212947a58dc659cf2cfc6464) is 8.0M, max 639.9M, 631.9M free. May 17 00:28:44.063237 systemd-modules-load[268]: Inserted module 'overlay' May 17 00:28:44.085260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:28:44.113916 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:28:44.183486 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:28:44.183499 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:28:44.183512 kernel: Bridge firewalling registered May 17 00:28:44.169433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:28:44.173382 systemd-modules-load[268]: Inserted module 'br_netfilter' May 17 00:28:44.195582 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:28:44.216558 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:28:44.225581 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:28:44.261639 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:28:44.274237 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:28:44.302801 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:28:44.311052 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:28:44.314508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:28:44.315508 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:28:44.316056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:28:44.321410 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:28:44.322249 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:28:44.322677 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:28:44.326535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:28:44.339731 systemd-resolved[303]: Positive Trust Anchors: May 17 00:28:44.339738 systemd-resolved[303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:28:44.339765 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:28:44.341488 systemd-resolved[303]: Defaulting to hostname 'linux'. May 17 00:28:44.368490 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 17 00:28:44.385702 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:28:44.424800 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:28:44.534671 dracut-cmdline[307]: dracut-dracut-053 May 17 00:28:44.542459 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:28:44.739287 kernel: SCSI subsystem initialized May 17 00:28:44.762286 kernel: Loading iSCSI transport class v2.0-870. May 17 00:28:44.785281 kernel: iscsi: registered transport (tcp) May 17 00:28:44.816820 kernel: iscsi: registered transport (qla4xxx) May 17 00:28:44.816837 kernel: QLogic iSCSI HBA Driver May 17 00:28:44.850076 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:28:44.874515 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:28:44.932015 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:28:44.932034 kernel: device-mapper: uevent: version 1.0.3 May 17 00:28:44.951700 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:28:45.009328 kernel: raid6: avx2x4 gen() 53560 MB/s May 17 00:28:45.041330 kernel: raid6: avx2x2 gen() 54060 MB/s May 17 00:28:45.077679 kernel: raid6: avx2x1 gen() 45193 MB/s May 17 00:28:45.077696 kernel: raid6: using algorithm avx2x2 gen() 54060 MB/s May 17 00:28:45.124752 kernel: raid6: .... xor() 30878 MB/s, rmw enabled May 17 00:28:45.124772 kernel: raid6: using avx2x2 recovery algorithm May 17 00:28:45.165287 kernel: xor: automatically using best checksumming function avx May 17 00:28:45.279279 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:28:45.284939 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:28:45.315586 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:28:45.322721 systemd-udevd[491]: Using default interface naming scheme 'v255'. May 17 00:28:45.325138 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:28:45.361496 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:28:45.403420 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation May 17 00:28:45.420577 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:28:45.447580 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:28:45.507586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:28:45.564749 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:28:45.564771 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:28:45.564785 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:28:45.571249 kernel: libata version 3.00 loaded. May 17 00:28:45.580252 kernel: ACPI: bus type USB registered May 17 00:28:45.580290 kernel: PTP clock support registered May 17 00:28:45.580299 kernel: AVX2 version of gcm_enc/dec engaged. 
May 17 00:28:45.582601 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:28:45.743678 kernel: usbcore: registered new interface driver usbfs May 17 00:28:45.743695 kernel: AES CTR mode by8 optimization enabled May 17 00:28:45.743705 kernel: ahci 0000:00:17.0: version 3.0 May 17 00:28:45.743831 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode May 17 00:28:45.743934 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst May 17 00:28:45.744033 kernel: usbcore: registered new interface driver hub May 17 00:28:45.744051 kernel: scsi host0: ahci May 17 00:28:45.744152 kernel: usbcore: registered new device driver usb May 17 00:28:45.744167 kernel: scsi host1: ahci May 17 00:28:45.744264 kernel: scsi host2: ahci May 17 00:28:45.758249 kernel: scsi host3: ahci May 17 00:28:45.771797 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:28:45.809352 kernel: scsi host4: ahci May 17 00:28:45.809448 kernel: scsi host5: ahci May 17 00:28:45.809513 kernel: scsi host6: ahci May 17 00:28:45.787481 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:28:45.944638 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 May 17 00:28:45.944656 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 May 17 00:28:45.944664 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 May 17 00:28:45.944672 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 May 17 00:28:45.944679 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 May 17 00:28:45.944686 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 May 17 00:28:45.944693 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 May 17 00:28:45.944700 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 17 00:28:45.944707 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. May 17 00:28:45.924319 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:28:45.963475 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:28:46.001818 kernel: igb 0000:03:00.0: added PHC on eth0 May 17 00:28:46.001915 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 00:28:46.001885 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:28:46.053112 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:78 May 17 00:28:46.053216 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 May 17 00:28:46.053327 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) May 17 00:28:46.002007 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:28:46.100770 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 May 17 00:28:46.100859 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 00:28:46.100927 kernel: igb 0000:04:00.0: added PHC on eth1 May 17 00:28:46.100995 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 00:28:46.107651 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 17 00:28:46.171352 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:79 May 17 00:28:46.171512 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 May 17 00:28:46.171580 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) May 17 00:28:46.167491 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:28:46.171396 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:28:46.171560 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:28:46.275236 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:28:46.275254 kernel: ata7: SATA link down (SStatus 0 SControl 300) May 17 00:28:46.275262 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:28:46.275270 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 17 00:28:46.275276 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 00:28:46.192294 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:28:46.311932 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 00:28:46.311944 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:28:46.311952 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 May 17 00:28:46.198505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:28:46.386580 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 May 17 00:28:46.386592 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 00:28:46.386603 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) May 17 00:28:46.386693 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 00:28:46.386702 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged May 17 00:28:46.386769 kernel: ata1.00: Features: NCQ-prio May 17 00:28:46.299600 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:28:46.490193 kernel: ata2.00: Features: NCQ-prio May 17 00:28:46.490210 kernel: ata1.00: configured for UDMA/133 May 17 00:28:46.490221 kernel: ata2.00: configured for UDMA/133 May 17 00:28:46.490231 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 May 17 00:28:46.490315 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 May 17 00:28:46.490380 kernel: igb 0000:04:00.0 eno2: renamed from eth1 May 17 00:28:46.299651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:28:46.344456 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:28:46.543097 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 00:28:46.543200 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 May 17 00:28:46.543275 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 May 17 00:28:46.543340 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 00:28:46.497043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 17 00:28:46.595975 kernel: igb 0000:03:00.0 eno1: renamed from eth0 May 17 00:28:46.596059 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 May 17 00:28:46.596129 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed May 17 00:28:46.596193 kernel: hub 1-0:1.0: USB hub found May 17 00:28:46.615052 kernel: hub 1-0:1.0: 16 ports detected May 17 00:28:46.623458 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:28:46.637500 kernel: hub 2-0:1.0: USB hub found May 17 00:28:46.637592 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 17 00:28:46.637671 kernel: hub 2-0:1.0: 10 ports detected May 17 00:28:46.642296 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 May 17 00:28:46.681272 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 00:28:46.712757 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:28:46.712780 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 00:28:46.712880 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:28:46.724965 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 17 00:28:46.725060 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 00:28:46.737688 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:28:46.737771 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 May 17 00:28:46.737844 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:28:46.737916 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks May 17 00:28:46.751469 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes May 17 00:28:46.751556 kernel: sd 1:0:0:0: [sdb] Write Protect is off May 17 00:28:46.752394 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:28:47.124623 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:28:47.124640 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 May 17 00:28:47.124743 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:28:47.124752 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:28:47.124817 kernel: GPT:9289727 != 937703087 May 17 00:28:47.124825 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:28:47.124831 kernel: GPT:9289727 != 937703087 May 17 00:28:47.124840 kernel: GPT: Use GNU Parted to correct GPT errors. 
May 17 00:28:47.124847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:28:47.124854 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes May 17 00:28:47.124915 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:28:47.124975 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:28:47.124983 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd May 17 00:28:47.125087 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk May 17 00:28:47.125150 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) May 17 00:28:47.125221 kernel: hub 1-14:1.0: USB hub found May 17 00:28:47.125338 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged May 17 00:28:47.125407 kernel: hub 1-14:1.0: 4 ports detected May 17 00:28:47.125474 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (560) May 17 00:28:47.125482 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (576) May 17 00:28:47.135794 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. May 17 00:28:47.179608 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:28:47.194574 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. May 17 00:28:47.251196 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. May 17 00:28:47.271641 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. May 17 00:28:47.345382 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 17 00:28:47.345475 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 May 17 00:28:47.345541 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 May 17 00:28:47.345604 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd May 17 00:28:47.294340 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. May 17 00:28:47.405363 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:28:47.405377 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:28:47.351336 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:28:47.425041 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:28:47.425098 disk-uuid[725]: Primary Header is updated. May 17 00:28:47.425098 disk-uuid[725]: Secondary Entries is updated. May 17 00:28:47.425098 disk-uuid[725]: Secondary Header is updated. 
May 17 00:28:47.473332 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:28:47.473344 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:28:47.473351 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:28:47.503272 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:28:47.526318 kernel: usbcore: registered new interface driver usbhid May 17 00:28:47.526336 kernel: usbhid: USB HID core driver May 17 00:28:47.572251 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 May 17 00:28:47.669680 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 May 17 00:28:47.669809 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 May 17 00:28:47.704643 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 May 17 00:28:48.472814 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:28:48.493949 disk-uuid[727]: The operation has completed successfully. May 17 00:28:48.503424 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:28:48.525846 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:28:48.525893 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:28:48.557533 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:28:48.596344 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 00:28:48.596406 sh[745]: Success May 17 00:28:48.626026 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:28:48.650473 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:28:48.658657 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:28:48.710770 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:28:48.710789 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:28:48.732623 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:28:48.752091 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:28:48.770511 kernel: BTRFS info (device dm-0): using free space tree May 17 00:28:48.809248 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:28:48.811614 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:28:48.820807 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:28:48.832571 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:28:48.856797 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 17 00:28:48.962229 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:28:48.962246 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:28:48.962280 kernel: BTRFS info (device sda6): using free space tree May 17 00:28:48.962302 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:28:48.962309 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:28:48.986248 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:28:48.988075 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:28:49.013908 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:28:49.080832 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:28:49.109146 ignition[806]: Ignition 2.19.0 May 17 00:28:49.109151 ignition[806]: Stage: fetch-offline May 17 00:28:49.109444 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:28:49.109172 ignition[806]: no configs at "/usr/lib/ignition/base.d" May 17 00:28:49.111237 unknown[806]: fetched base config from "system" May 17 00:28:49.109177 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:28:49.111241 unknown[806]: fetched user config from "system" May 17 00:28:49.109230 ignition[806]: parsed url from cmdline: "" May 17 00:28:49.115529 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:28:49.109231 ignition[806]: no config URL provided May 17 00:28:49.120701 systemd-networkd[929]: lo: Link UP May 17 00:28:49.109234 ignition[806]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:28:49.120703 systemd-networkd[929]: lo: Gained carrier May 17 00:28:49.109283 ignition[806]: parsing config with SHA512: c29a6a678bf9f2aea86279e54f5a47ab25aa6f5fc47e2fa4ae58d8a745b5da7faa630a54fbce129a0296b17ec5a0d29a9c15e4277c12c600441028deb15cc73c May 17 00:28:49.123116 systemd-networkd[929]: Enumeration completed May 17 00:28:49.111488 ignition[806]: fetch-offline: fetch-offline passed May 17 00:28:49.123913 systemd-networkd[929]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:28:49.111490 ignition[806]: POST message to Packet Timeline May 17 00:28:49.130515 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:28:49.111493 ignition[806]: POST Status error: resource requires networking May 17 00:28:49.152353 systemd-networkd[929]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:28:49.111526 ignition[806]: Ignition finished successfully May 17 00:28:49.159777 systemd[1]: Reached target network.target - Network. May 17 00:28:49.211652 ignition[945]: Ignition 2.19.0 May 17 00:28:49.180347 systemd-networkd[929]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:28:49.211663 ignition[945]: Stage: kargs May 17 00:28:49.183355 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 00:28:49.211934 ignition[945]: no configs at "/usr/lib/ignition/base.d" May 17 00:28:49.196416 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 17 00:28:49.211953 ignition[945]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:28:49.418372 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 17 00:28:49.410115 systemd-networkd[929]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:28:49.213480 ignition[945]: kargs: kargs passed May 17 00:28:49.213487 ignition[945]: POST message to Packet Timeline May 17 00:28:49.213508 ignition[945]: GET https://metadata.packet.net/metadata: attempt #1 May 17 00:28:49.214698 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42050->[::1]:53: read: connection refused May 17 00:28:49.415702 ignition[945]: GET https://metadata.packet.net/metadata: attempt #2 May 17 00:28:49.416253 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45361->[::1]:53: read: connection refused May 17 00:28:49.703377 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 00:28:49.704529 systemd-networkd[929]: eno1: Link UP May 17 00:28:49.704719 systemd-networkd[929]: eno2: Link UP May 17 00:28:49.704897 systemd-networkd[929]: enp1s0f0np0: Link UP May 17 00:28:49.705111 systemd-networkd[929]: enp1s0f0np0: Gained carrier May 17 00:28:49.720510 systemd-networkd[929]: enp1s0f1np1: Link UP May 17 00:28:49.755447 systemd-networkd[929]: enp1s0f0np0: DHCPv4 address 147.75.203.231/31, gateway 147.75.203.230 acquired from 145.40.83.140 May 17 00:28:49.816373 ignition[945]: GET https://metadata.packet.net/metadata: attempt #3 May 17 00:28:49.817568 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35214->[::1]:53: read: connection refused May 17 00:28:50.432819 systemd-networkd[929]: enp1s0f1np1: Gained carrier May 17 00:28:50.617917 ignition[945]: GET https://metadata.packet.net/metadata: attempt #4 May 17 00:28:50.619093 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48362->[::1]:53: read: connection refused May 17 00:28:51.136855 systemd-networkd[929]: enp1s0f0np0: Gained IPv6LL May 17 00:28:52.220397 ignition[945]: GET https://metadata.packet.net/metadata: attempt #5 May 17 00:28:52.221535 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44862->[::1]:53: read: connection refused May 17 00:28:52.480826 systemd-networkd[929]: enp1s0f1np1: Gained IPv6LL May 17 00:28:55.424241 ignition[945]: GET https://metadata.packet.net/metadata: attempt #6 May 17 00:28:56.404887 ignition[945]: GET result: OK May 17 00:28:56.874768 ignition[945]: Ignition finished successfully May 17 00:28:56.879647 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:28:56.912493 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 17 00:28:56.919351 ignition[964]: Ignition 2.19.0 May 17 00:28:56.919356 ignition[964]: Stage: disks May 17 00:28:56.919484 ignition[964]: no configs at "/usr/lib/ignition/base.d" May 17 00:28:56.919492 ignition[964]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:28:56.920121 ignition[964]: disks: disks passed May 17 00:28:56.920124 ignition[964]: POST message to Packet Timeline May 17 00:28:56.920134 ignition[964]: GET https://metadata.packet.net/metadata: attempt #1 May 17 00:28:57.873015 ignition[964]: GET result: OK May 17 00:28:58.202480 ignition[964]: Ignition finished successfully May 17 00:28:58.206034 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:28:58.222486 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:28:58.241488 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:28:58.263580 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:28:58.285553 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:28:58.305550 systemd[1]: Reached target basic.target - Basic System. May 17 00:28:58.337526 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:28:58.374178 systemd-fsck[982]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:28:58.384725 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:28:58.406491 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:28:58.525981 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:28:58.542495 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:28:58.535753 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:28:58.562439 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:28:58.594730 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:28:58.697271 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (991) May 17 00:28:58.697285 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:28:58.697293 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:28:58.697301 kernel: BTRFS info (device sda6): using free space tree May 17 00:28:58.697308 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:28:58.697315 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:28:58.657168 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 17 00:28:58.708842 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... May 17 00:28:58.731383 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:28:58.731420 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:28:58.780542 coreos-metadata[998]: May 17 00:28:58.761 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 00:28:58.771530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 17 00:28:58.818435 coreos-metadata[1009]: May 17 00:28:58.762 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 00:28:58.789576 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:28:58.820544 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:28:58.856390 initrd-setup-root[1023]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:28:58.866361 initrd-setup-root[1030]: cut: /sysroot/etc/group: No such file or directory May 17 00:28:58.877365 initrd-setup-root[1037]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:28:58.887369 initrd-setup-root[1044]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:28:58.904542 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:28:58.923459 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:28:58.949754 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:28:58.958382 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:28:58.958790 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:28:58.981495 ignition[1111]: INFO : Ignition 2.19.0 May 17 00:28:58.981495 ignition[1111]: INFO : Stage: mount May 17 00:28:59.005490 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:28:59.005490 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:28:59.005490 ignition[1111]: INFO : mount: mount passed May 17 00:28:59.005490 ignition[1111]: INFO : POST message to Packet Timeline May 17 00:28:59.005490 ignition[1111]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 00:28:58.983200 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:28:59.714845 coreos-metadata[1009]: May 17 00:28:59.714 INFO Fetch successful May 17 00:28:59.794582 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 17 00:28:59.794640 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. May 17 00:28:59.826320 coreos-metadata[998]: May 17 00:28:59.802 INFO Fetch successful May 17 00:28:59.834322 ignition[1111]: INFO : GET result: OK May 17 00:28:59.842505 coreos-metadata[998]: May 17 00:28:59.833 INFO wrote hostname ci-4081.3.3-n-65a4af4639 to /sysroot/etc/hostname May 17 00:28:59.834363 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:29:00.227181 ignition[1111]: INFO : Ignition finished successfully May 17 00:29:00.230205 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:29:00.260542 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:29:00.272502 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:29:00.336226 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1133) May 17 00:29:00.336249 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:29:00.356503 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:29:00.374735 kernel: BTRFS info (device sda6): using free space tree May 17 00:29:00.414652 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:29:00.414675 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:29:00.428847 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 17 00:29:00.452870 ignition[1150]: INFO : Ignition 2.19.0 May 17 00:29:00.452870 ignition[1150]: INFO : Stage: files May 17 00:29:00.466485 ignition[1150]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:29:00.466485 ignition[1150]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:29:00.466485 ignition[1150]: DEBUG : files: compiled without relabeling support, skipping May 17 00:29:00.466485 ignition[1150]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:29:00.466485 ignition[1150]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:29:00.466485 ignition[1150]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:29:00.466485 ignition[1150]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:29:00.466485 ignition[1150]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:29:00.466485 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:29:00.466485 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:29:00.457349 unknown[1150]: wrote ssh authorized keys file for user: core May 17 00:29:00.599536 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:29:00.691211 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:29:00.691211 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:00.724519 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:29:01.495428 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 00:29:01.699806 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:01.699806 ignition[1150]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 00:29:01.730460 ignition[1150]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:29:01.730460 ignition[1150]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:29:01.730460 ignition[1150]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 00:29:01.730460 ignition[1150]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 17 00:29:01.730460 ignition[1150]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:29:01.730460 ignition[1150]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:29:01.730460 ignition[1150]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:29:01.730460 ignition[1150]: INFO : files: files passed May 17 00:29:01.730460 ignition[1150]: INFO : POST message to Packet Timeline May 17 00:29:01.730460 ignition[1150]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 00:29:02.707774 ignition[1150]: INFO : GET result: OK May 17 00:29:03.112336 ignition[1150]: INFO : Ignition finished successfully May 17 00:29:03.116295 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:29:03.144493 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:29:03.155835 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:29:03.176533 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:29:03.176600 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:29:03.224954 initrd-setup-root-after-ignition[1189]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:29:03.224954 initrd-setup-root-after-ignition[1189]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:29:03.263522 initrd-setup-root-after-ignition[1193]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:29:03.229452 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:29:03.240620 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:29:03.287545 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
May 17 00:29:03.334820 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:29:03.334945 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:29:03.344269 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:29:03.375547 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:29:03.395731 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:29:03.410655 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:29:03.481236 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:29:03.506636 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:29:03.535227 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:29:03.546768 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:29:03.567923 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:29:03.585854 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:29:03.586273 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:29:03.625721 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:29:03.635865 systemd[1]: Stopped target basic.target - Basic System. May 17 00:29:03.654857 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:29:03.673862 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:29:03.694966 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:29:03.716862 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:29:03.736836 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:29:03.757885 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:29:03.779871 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:29:03.799952 systemd[1]: Stopped target swap.target - Swaps. May 17 00:29:03.817717 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:29:03.818114 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:29:03.853444 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:29:03.863531 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:29:03.884633 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:29:03.884867 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:29:03.906534 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:29:03.906703 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:29:03.944453 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:29:03.944659 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:29:03.965683 systemd[1]: Stopped target paths.target - Path Units. May 17 00:29:03.984644 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:29:03.988497 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 17 00:29:04.005698 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:29:04.023694 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:29:04.043593 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:29:04.043721 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:29:04.066615 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:29:04.066739 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:29:04.085636 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:29:04.085801 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:29:04.204501 ignition[1213]: INFO : Ignition 2.19.0 May 17 00:29:04.204501 ignition[1213]: INFO : Stage: umount May 17 00:29:04.204501 ignition[1213]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:29:04.204501 ignition[1213]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:29:04.204501 ignition[1213]: INFO : umount: umount passed May 17 00:29:04.204501 ignition[1213]: INFO : POST message to Packet Timeline May 17 00:29:04.204501 ignition[1213]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 00:29:04.105636 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:29:04.105795 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:29:04.124634 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:29:04.124797 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:29:04.155511 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:29:04.172553 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:29:04.173011 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:29:04.205563 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:29:04.219503 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:29:04.219573 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:29:04.245582 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:29:04.245688 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:29:04.286804 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:29:04.288337 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:29:04.288556 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:29:04.305189 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:29:04.305463 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:29:05.724676 ignition[1213]: INFO : GET result: OK May 17 00:29:06.124573 ignition[1213]: INFO : Ignition finished successfully May 17 00:29:06.127432 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:29:06.127724 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:29:06.144522 systemd[1]: Stopped target network.target - Network. May 17 00:29:06.160471 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:29:06.160644 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:29:06.178639 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:29:06.178800 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
May 17 00:29:06.197662 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:29:06.197815 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:29:06.216625 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:29:06.216793 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:29:06.236643 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:29:06.236812 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:29:06.255008 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:29:06.266411 systemd-networkd[929]: enp1s0f0np0: DHCPv6 lease lost May 17 00:29:06.272770 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:29:06.275488 systemd-networkd[929]: enp1s0f1np1: DHCPv6 lease lost May 17 00:29:06.291221 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:29:06.291546 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:29:06.310477 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:29:06.310846 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:29:06.330611 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:29:06.330759 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:29:06.366436 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:29:06.387393 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:29:06.387435 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:29:06.406534 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:29:06.406622 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:29:06.427623 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:29:06.427784 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:29:06.446635 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:29:06.446799 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:29:06.466981 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:29:06.490505 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:29:06.490888 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:29:06.520301 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:29:06.520453 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:29:06.527743 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:29:06.527840 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:29:06.555513 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:29:06.555648 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:29:06.585836 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:29:06.586005 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:29:06.625395 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:29:06.625559 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 17 00:29:06.673302 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:29:06.689377 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:29:06.689412 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:29:06.717415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:29:06.717470 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:29:06.740507 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:29:06.740763 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:29:06.968489 systemd-journald[266]: Received SIGTERM from PID 1 (systemd). May 17 00:29:06.810766 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:29:06.811036 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:29:06.829471 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:29:06.862672 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:29:06.909679 systemd[1]: Switching root. May 17 00:29:07.011379 systemd-journald[266]: Journal stopped May 17 00:29:09.697162 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:29:09.697177 kernel: SELinux: policy capability open_perms=1 May 17 00:29:09.697184 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:29:09.697191 kernel: SELinux: policy capability always_check_network=0 May 17 00:29:09.697196 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:29:09.697201 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:29:09.697207 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:29:09.697212 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:29:09.697217 kernel: audit: type=1403 audit(1747441747.295:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:29:09.697224 systemd[1]: Successfully loaded SELinux policy in 176.825ms. May 17 00:29:09.697232 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.917ms. May 17 00:29:09.697239 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:29:09.697248 systemd[1]: Detected architecture x86-64. May 17 00:29:09.697254 systemd[1]: Detected first boot. May 17 00:29:09.697261 systemd[1]: Hostname set to <ci-4081.3.3-n-65a4af4639>. May 17 00:29:09.697269 systemd[1]: Initializing machine ID from random generator. May 17 00:29:09.697275 zram_generator::config[1267]: No configuration found. May 17 00:29:09.697282 systemd[1]: Populated /etc with preset unit settings. May 17 00:29:09.697288 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:29:09.697294 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 00:29:09.697301 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:29:09.697307 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:29:09.697315 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:29:09.697321 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:29:09.697327 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:29:09.697334 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:29:09.697340 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:29:09.697347 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:29:09.697353 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:29:09.697361 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:29:09.697368 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:29:09.697374 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:29:09.697381 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:29:09.697387 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 17 00:29:09.697394 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:29:09.697400 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... May 17 00:29:09.697406 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:29:09.697414 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 00:29:09.697420 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 00:29:09.697427 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 00:29:09.697435 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:29:09.697441 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:29:09.697448 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:29:09.697455 systemd[1]: Reached target slices.target - Slice Units. May 17 00:29:09.697463 systemd[1]: Reached target swap.target - Swaps. May 17 00:29:09.697469 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:29:09.697476 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:29:09.697482 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:29:09.697489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:29:09.697495 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:29:09.697503 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:29:09.697510 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:29:09.697517 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:29:09.697523 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:29:09.697530 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:29:09.697537 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:29:09.697543 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
May 17 00:29:09.697551 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:29:09.697558 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:29:09.697565 systemd[1]: Reached target machines.target - Containers. May 17 00:29:09.697572 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:29:09.697578 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:29:09.697585 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:29:09.697592 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:29:09.697598 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:29:09.697605 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:29:09.697613 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:29:09.697620 kernel: ACPI: bus type drm_connector registered May 17 00:29:09.697626 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 17 00:29:09.697633 kernel: fuse: init (API version 7.39) May 17 00:29:09.697639 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:29:09.697647 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:29:09.697654 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:29:09.697661 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 00:29:09.697668 kernel: loop: module loaded May 17 00:29:09.697675 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:29:09.697681 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:29:09.697688 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:29:09.697702 systemd-journald[1370]: Collecting audit messages is disabled. May 17 00:29:09.697717 systemd-journald[1370]: Journal started May 17 00:29:09.697731 systemd-journald[1370]: Runtime Journal (/run/log/journal/49904bcf7e2949cf8cf42830290febba) is 8.0M, max 639.9M, 631.9M free. May 17 00:29:07.811048 systemd[1]: Queued start job for default target multi-user.target. May 17 00:29:07.825409 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 17 00:29:07.825703 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:29:09.725252 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:29:09.760293 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:29:09.794252 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:29:09.828294 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:29:09.861766 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:29:09.861797 systemd[1]: Stopped verity-setup.service. May 17 00:29:09.924291 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 17 00:29:09.945455 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:29:09.954832 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:29:09.964529 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:29:09.974530 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:29:09.984492 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:29:09.994487 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:29:10.004495 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:29:10.015613 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:29:10.027727 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:29:10.039106 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:29:10.039419 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:29:10.051187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:29:10.051706 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:29:10.063177 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:29:10.063680 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:29:10.074165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:29:10.074662 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:29:10.086396 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:29:10.086899 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:29:10.097172 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:29:10.097588 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:29:10.108183 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:29:10.119147 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:29:10.131186 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:29:10.143151 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:29:10.178417 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:29:10.205614 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:29:10.217066 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:29:10.226464 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:29:10.226491 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:29:10.237176 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:29:10.254516 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:29:10.266797 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:29:10.276535 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:29:10.277569 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
May 17 00:29:10.287860 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:29:10.299383 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:29:10.300004 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:29:10.303305 systemd-journald[1370]: Time spent on flushing to /var/log/journal/49904bcf7e2949cf8cf42830290febba is 23.977ms for 1369 entries. May 17 00:29:10.303305 systemd-journald[1370]: System Journal (/var/log/journal/49904bcf7e2949cf8cf42830290febba) is 8.0M, max 195.6M, 187.6M free. May 17 00:29:10.359623 systemd-journald[1370]: Received client request to flush runtime journal. May 17 00:29:10.359665 kernel: loop0: detected capacity change from 0 to 142488 May 17 00:29:10.317383 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:29:10.317989 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:29:10.326388 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:29:10.336079 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:29:10.346979 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:29:10.374418 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:29:10.399575 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:29:10.413266 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:29:10.425485 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:29:10.436472 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:29:10.448463 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:29:10.465461 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:29:10.475249 kernel: loop1: detected capacity change from 0 to 221472 May 17 00:29:10.483465 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:29:10.496287 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:29:10.523490 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:29:10.540973 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:29:10.544292 kernel: loop2: detected capacity change from 0 to 8 May 17 00:29:10.555834 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:29:10.565503 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:29:10.577971 udevadm[1405]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:29:10.579742 systemd-tmpfiles[1419]: ACLs are not supported, ignoring. May 17 00:29:10.579752 systemd-tmpfiles[1419]: ACLs are not supported, ignoring. May 17 00:29:10.582332 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 17 00:29:10.607249 kernel: loop3: detected capacity change from 0 to 140768 May 17 00:29:10.652589 ldconfig[1396]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:29:10.654122 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:29:10.686255 kernel: loop4: detected capacity change from 0 to 142488 May 17 00:29:10.737256 kernel: loop5: detected capacity change from 0 to 221472 May 17 00:29:10.746036 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:29:10.768249 kernel: loop6: detected capacity change from 0 to 8 May 17 00:29:10.787329 kernel: loop7: detected capacity change from 0 to 140768 May 17 00:29:10.792435 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:29:10.800149 (sd-merge)[1427]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. May 17 00:29:10.800448 (sd-merge)[1427]: Merged extensions into '/usr'. May 17 00:29:10.804908 systemd-udevd[1429]: Using default interface naming scheme 'v255'. May 17 00:29:10.806037 systemd[1]: Reloading requested from client PID 1401 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:29:10.806044 systemd[1]: Reloading... May 17 00:29:10.838259 zram_generator::config[1454]: No configuration found. May 17 00:29:10.864257 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1472) May 17 00:29:10.873256 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:29:10.873305 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 May 17 00:29:10.885049 kernel: IPMI message handler: version 39.2 May 17 00:29:10.885133 kernel: ACPI: button: Sleep Button [SLPB] May 17 00:29:10.927251 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set May 17 00:29:10.927427 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 17 00:29:10.939911 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt May 17 00:29:10.946256 kernel: ACPI: button: Power Button [PWRF] May 17 00:29:10.950249 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) May 17 00:29:10.980288 kernel: ipmi device interface May 17 00:29:11.097252 kernel: iTCO_vendor_support: vendor-support=0 May 17 00:29:11.097337 kernel: ipmi_si: IPMI System Interface driver May 17 00:29:11.097368 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface May 17 00:29:11.098535 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface May 17 00:29:11.140967 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:29:11.168412 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS May 17 00:29:11.186539 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 May 17 00:29:11.193610 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. May 17 00:29:11.193782 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. 
May 17 00:29:11.203346 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine May 17 00:29:11.203364 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI May 17 00:29:11.203468 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 May 17 00:29:11.228067 systemd[1]: Reloading finished in 421 ms. May 17 00:29:11.257899 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI May 17 00:29:11.273998 kernel: ipmi_si: Adding ACPI-specified kcs state machine May 17 00:29:11.294290 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 May 17 00:29:11.337250 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) May 17 00:29:11.337402 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) May 17 00:29:11.344248 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. May 17 00:29:11.414250 kernel: intel_rapl_common: Found RAPL domain package May 17 00:29:11.414278 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) May 17 00:29:11.414376 kernel: intel_rapl_common: Found RAPL domain core May 17 00:29:11.414410 kernel: intel_rapl_common: Found RAPL domain dram May 17 00:29:11.472450 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:29:11.484492 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:29:11.514283 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized May 17 00:29:11.532248 kernel: ipmi_ssif: IPMI SSIF Interface driver May 17 00:29:11.541533 systemd[1]: Starting ensure-sysext.service... May 17 00:29:11.548944 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:29:11.560152 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:29:11.577410 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:29:11.585425 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:29:11.585657 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:29:11.586154 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:29:11.586329 systemd-tmpfiles[1607]: ACLs are not supported, ignoring. May 17 00:29:11.586366 systemd-tmpfiles[1607]: ACLs are not supported, ignoring. May 17 00:29:11.588030 systemd-tmpfiles[1607]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:29:11.588034 systemd-tmpfiles[1607]: Skipping /boot May 17 00:29:11.588937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:29:11.592282 systemd-tmpfiles[1607]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:29:11.592287 systemd-tmpfiles[1607]: Skipping /boot May 17 00:29:11.599537 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:29:11.610472 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
May 17 00:29:11.610622 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:29:11.629373 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:29:11.630274 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:29:11.630868 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:29:11.631454 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:29:11.632447 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:29:11.633081 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:29:11.633974 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:29:11.634534 systemd[1]: Reloading requested from client PID 1601 ('systemctl') (unit ensure-sysext.service)... May 17 00:29:11.634541 systemd[1]: Reloading... May 17 00:29:11.641997 lvm[1620]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:29:11.654900 augenrules[1648]: No rules May 17 00:29:11.673251 zram_generator::config[1669]: No configuration found. May 17 00:29:11.722702 systemd-networkd[1605]: lo: Link UP May 17 00:29:11.722706 systemd-networkd[1605]: lo: Gained carrier May 17 00:29:11.725170 systemd-networkd[1605]: bond0: netdev ready May 17 00:29:11.726075 systemd-networkd[1605]: Enumeration completed May 17 00:29:11.732572 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:29:11.737480 systemd-networkd[1605]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:3c:74.network. May 17 00:29:11.788654 systemd[1]: Reloading finished in 153 ms. May 17 00:29:11.806589 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:29:11.816379 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:29:11.842465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:29:11.853505 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:29:11.863451 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:29:11.874436 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:29:11.885448 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:29:11.913425 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:29:11.925785 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:29:11.935416 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:29:11.935572 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:29:11.941539 systemd-resolved[1623]: Positive Trust Anchors: May 17 00:29:11.941546 systemd-resolved[1623]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:29:11.941569 systemd-resolved[1623]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:29:11.944457 systemd-resolved[1623]: Using system hostname 'ci-4081.3.3-n-65a4af4639'. May 17 00:29:11.946523 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:29:11.948446 lvm[1747]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:29:11.957959 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:29:11.968879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:29:11.980864 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:29:11.991377 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:29:12.003785 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:29:12.015904 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:29:12.025300 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:29:12.025382 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:29:12.026161 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:29:12.038515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:29:12.038587 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:29:12.049482 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:29:12.049550 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:29:12.060474 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:29:12.060540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:29:12.070499 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:29:12.083117 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:29:12.083248 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:29:12.092387 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:29:12.103869 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:29:12.114854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:29:12.134446 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 17 00:29:12.145331 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:29:12.145410 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:29:12.145461 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:29:12.146063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:29:12.146132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:29:12.158524 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:29:12.158589 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:29:12.168488 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:29:12.168560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:29:12.179482 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:29:12.179551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:29:12.190151 systemd[1]: Finished ensure-sysext.service. May 17 00:29:12.199673 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:29:12.199704 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:29:12.209368 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 00:29:12.244066 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:29:12.255362 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:29:12.661286 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 17 00:29:12.683577 systemd-networkd[1605]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:3c:75.network. May 17 00:29:12.684277 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link May 17 00:29:12.894317 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 00:29:12.917797 systemd-networkd[1605]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 17 00:29:12.918257 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link May 17 00:29:12.919863 systemd-networkd[1605]: enp1s0f0np0: Link UP May 17 00:29:12.920335 systemd-networkd[1605]: enp1s0f0np0: Gained carrier May 17 00:29:12.921261 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:29:12.939252 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 17 00:29:12.946405 systemd-networkd[1605]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:3c:74.network. May 17 00:29:12.946640 systemd-networkd[1605]: enp1s0f1np1: Link UP May 17 00:29:12.946978 systemd-networkd[1605]: enp1s0f1np1: Gained carrier May 17 00:29:12.948354 systemd[1]: Reached target network.target - Network. May 17 00:29:12.952470 systemd-networkd[1605]: bond0: Link UP May 17 00:29:12.952813 systemd-networkd[1605]: bond0: Gained carrier May 17 00:29:12.953032 systemd-timesyncd[1767]: Network configuration changed, trying to establish connection. 
May 17 00:29:12.957302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:29:12.969340 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:29:12.979405 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:29:12.990394 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:29:13.002780 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:29:13.013564 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:29:13.025395 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:29:13.044350 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:29:13.044430 systemd[1]: Reached target paths.target - Path Units. May 17 00:29:13.057264 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex May 17 00:29:13.057342 kernel: bond0: active interface up! May 17 00:29:13.081340 systemd[1]: Reached target timers.target - Timer Units. May 17 00:29:13.090163 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:29:13.099982 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:29:13.112125 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:29:13.121575 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:29:13.131332 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:29:13.141282 systemd[1]: Reached target basic.target - Basic System. May 17 00:29:13.149296 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:29:13.149312 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:29:13.155325 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:29:13.173925 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:29:13.182249 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex May 17 00:29:13.191829 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:29:13.200891 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:29:13.204505 coreos-metadata[1772]: May 17 00:29:13.204 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 00:29:13.210798 dbus-daemon[1773]: [system] SELinux support is enabled May 17 00:29:13.211938 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:29:13.213810 jq[1776]: false May 17 00:29:13.222317 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:29:13.222861 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
May 17 00:29:13.230170 extend-filesystems[1778]: Found loop4 May 17 00:29:13.230170 extend-filesystems[1778]: Found loop5 May 17 00:29:13.288432 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks May 17 00:29:13.288453 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1463) May 17 00:29:13.288464 extend-filesystems[1778]: Found loop6 May 17 00:29:13.288464 extend-filesystems[1778]: Found loop7 May 17 00:29:13.288464 extend-filesystems[1778]: Found sda May 17 00:29:13.288464 extend-filesystems[1778]: Found sda1 May 17 00:29:13.288464 extend-filesystems[1778]: Found sda2 May 17 00:29:13.288464 extend-filesystems[1778]: Found sda3 May 17 00:29:13.288464 extend-filesystems[1778]: Found usr May 17 00:29:13.288464 extend-filesystems[1778]: Found sda4 May 17 00:29:13.288464 extend-filesystems[1778]: Found sda6 May 17 00:29:13.288464 extend-filesystems[1778]: Found sda7 May 17 00:29:13.288464 extend-filesystems[1778]: Found sda9 May 17 00:29:13.288464 extend-filesystems[1778]: Checking size of /dev/sda9 May 17 00:29:13.288464 extend-filesystems[1778]: Resized partition /dev/sda9 May 17 00:29:13.232897 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:29:13.426532 extend-filesystems[1788]: resize2fs 1.47.1 (20-May-2024) May 17 00:29:13.305369 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:29:13.325868 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:29:13.333773 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:29:13.347127 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... May 17 00:29:13.369608 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:29:13.369953 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:29:13.450732 update_engine[1803]: I20250517 00:29:13.411198 1803 main.cc:92] Flatcar Update Engine starting May 17 00:29:13.450732 update_engine[1803]: I20250517 00:29:13.411980 1803 update_check_scheduler.cc:74] Next update check in 9m11s May 17 00:29:13.397000 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:29:13.450906 jq[1804]: true May 17 00:29:13.407275 systemd-logind[1798]: Watching system buttons on /dev/input/event3 (Power Button) May 17 00:29:13.407285 systemd-logind[1798]: Watching system buttons on /dev/input/event2 (Sleep Button) May 17 00:29:13.407294 systemd-logind[1798]: Watching system buttons on /dev/input/event0 (HID 0557:2419) May 17 00:29:13.407677 systemd-logind[1798]: New seat seat0. May 17 00:29:13.418630 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:29:13.442641 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:29:13.469426 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:29:13.469521 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:29:13.469694 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:29:13.469780 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:29:13.479685 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:29:13.479766 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 17 00:29:13.493325 (ntainerd)[1808]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:29:13.494655 jq[1807]: true May 17 00:29:13.496597 dbus-daemon[1773]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:29:13.498540 tar[1806]: linux-amd64/helm May 17 00:29:13.504508 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. May 17 00:29:13.504605 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. May 17 00:29:13.507030 systemd[1]: Started update-engine.service - Update Engine. May 17 00:29:13.516856 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:29:13.516954 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:29:13.527340 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:29:13.527418 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:29:13.544613 bash[1835]: Updated "/home/core/.ssh/authorized_keys" May 17 00:29:13.561458 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:29:13.573143 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:29:13.580193 locksmithd[1837]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:29:13.586064 systemd[1]: Starting sshkeys.service... May 17 00:29:13.597340 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:29:13.609031 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:29:13.631002 coreos-metadata[1849]: May 17 00:29:13.630 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 00:29:13.666208 containerd[1808]: time="2025-05-17T00:29:13.666163345Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:29:13.684407 containerd[1808]: time="2025-05-17T00:29:13.684383830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:29:13.685257 containerd[1808]: time="2025-05-17T00:29:13.685234864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:29:13.685257 containerd[1808]: time="2025-05-17T00:29:13.685255418Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:29:13.685312 containerd[1808]: time="2025-05-17T00:29:13.685265133Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:29:13.685359 containerd[1808]: time="2025-05-17T00:29:13.685351073Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 May 17 00:29:13.685376 containerd[1808]: time="2025-05-17T00:29:13.685365578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:29:13.685408 containerd[1808]: time="2025-05-17T00:29:13.685399571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:29:13.685426 containerd[1808]: time="2025-05-17T00:29:13.685408729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:29:13.685512 containerd[1808]: time="2025-05-17T00:29:13.685502856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:29:13.685529 containerd[1808]: time="2025-05-17T00:29:13.685512345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:29:13.685529 containerd[1808]: time="2025-05-17T00:29:13.685521287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:29:13.685529 containerd[1808]: time="2025-05-17T00:29:13.685527094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:29:13.685573 containerd[1808]: time="2025-05-17T00:29:13.685568525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:29:13.685690 containerd[1808]: time="2025-05-17T00:29:13.685682691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:29:13.685746 containerd[1808]: time="2025-05-17T00:29:13.685738042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:29:13.685764 containerd[1808]: time="2025-05-17T00:29:13.685746820Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:29:13.685795 containerd[1808]: time="2025-05-17T00:29:13.685788415Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:29:13.685825 containerd[1808]: time="2025-05-17T00:29:13.685817822Z" level=info msg="metadata content store policy set" policy=shared May 17 00:29:13.696718 containerd[1808]: time="2025-05-17T00:29:13.696704601Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:29:13.696759 containerd[1808]: time="2025-05-17T00:29:13.696728586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:29:13.696759 containerd[1808]: time="2025-05-17T00:29:13.696738386Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:29:13.696759 containerd[1808]: time="2025-05-17T00:29:13.696747374Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 May 17 00:29:13.696759 containerd[1808]: time="2025-05-17T00:29:13.696755574Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:29:13.696831 containerd[1808]: time="2025-05-17T00:29:13.696823012Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:29:13.696958 containerd[1808]: time="2025-05-17T00:29:13.696950662Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:29:13.697014 containerd[1808]: time="2025-05-17T00:29:13.697004939Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:29:13.697033 containerd[1808]: time="2025-05-17T00:29:13.697014862Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:29:13.697033 containerd[1808]: time="2025-05-17T00:29:13.697022381Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:29:13.697033 containerd[1808]: time="2025-05-17T00:29:13.697030024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:29:13.697074 containerd[1808]: time="2025-05-17T00:29:13.697037159Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:29:13.697074 containerd[1808]: time="2025-05-17T00:29:13.697045034Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:29:13.697074 containerd[1808]: time="2025-05-17T00:29:13.697052645Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:29:13.697074 containerd[1808]: time="2025-05-17T00:29:13.697059986Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:29:13.697134 containerd[1808]: time="2025-05-17T00:29:13.697073817Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:29:13.697134 containerd[1808]: time="2025-05-17T00:29:13.697081369Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:29:13.697134 containerd[1808]: time="2025-05-17T00:29:13.697087761Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:29:13.697134 containerd[1808]: time="2025-05-17T00:29:13.697099164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697134 containerd[1808]: time="2025-05-17T00:29:13.697106609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697134 containerd[1808]: time="2025-05-17T00:29:13.697113208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697134 containerd[1808]: time="2025-05-17T00:29:13.697120510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697134 containerd[1808]: time="2025-05-17T00:29:13.697129806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 May 17 00:29:13.697252 containerd[1808]: time="2025-05-17T00:29:13.697137306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697252 containerd[1808]: time="2025-05-17T00:29:13.697143872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697252 containerd[1808]: time="2025-05-17T00:29:13.697152854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697252 containerd[1808]: time="2025-05-17T00:29:13.697160404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697252 containerd[1808]: time="2025-05-17T00:29:13.697168234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697252 containerd[1808]: time="2025-05-17T00:29:13.697174708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697252 containerd[1808]: time="2025-05-17T00:29:13.697181079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697252 containerd[1808]: time="2025-05-17T00:29:13.697187867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697252 containerd[1808]: time="2025-05-17T00:29:13.697196063Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:29:13.697252 containerd[1808]: time="2025-05-17T00:29:13.697208642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697252 containerd[1808]: time="2025-05-17T00:29:13.697215420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697252 containerd[1808]: time="2025-05-17T00:29:13.697221201Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:29:13.697252 containerd[1808]: time="2025-05-17T00:29:13.697249449Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:29:13.697433 containerd[1808]: time="2025-05-17T00:29:13.697260643Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:29:13.697433 containerd[1808]: time="2025-05-17T00:29:13.697266708Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:29:13.697433 containerd[1808]: time="2025-05-17T00:29:13.697273330Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:29:13.697433 containerd[1808]: time="2025-05-17T00:29:13.697278581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697433 containerd[1808]: time="2025-05-17T00:29:13.697285103Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:29:13.697433 containerd[1808]: time="2025-05-17T00:29:13.697290677Z" level=info msg="NRI interface is disabled by configuration." 
May 17 00:29:13.697433 containerd[1808]: time="2025-05-17T00:29:13.697296169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:29:13.697536 containerd[1808]: time="2025-05-17T00:29:13.697444661Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:29:13.697536 containerd[1808]: time="2025-05-17T00:29:13.697477369Z" level=info msg="Connect containerd service" May 17 00:29:13.697536 containerd[1808]: time="2025-05-17T00:29:13.697495028Z" level=info msg="using legacy CRI server" May 17 00:29:13.697536 containerd[1808]: time="2025-05-17T00:29:13.697499225Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:29:13.697653 containerd[1808]: time="2025-05-17T00:29:13.697551097Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:29:13.697873 containerd[1808]: time="2025-05-17T00:29:13.697860299Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:29:13.697969 containerd[1808]: time="2025-05-17T00:29:13.697948250Z" level=info msg="Start subscribing containerd event" May 17 00:29:13.697991 containerd[1808]: time="2025-05-17T00:29:13.697978654Z" level=info msg="Start recovering state" May 17 00:29:13.698028 containerd[1808]: time="2025-05-17T00:29:13.698021164Z" level=info msg="Start event monitor" May 17 00:29:13.698046 containerd[1808]: time="2025-05-17T00:29:13.698023923Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:29:13.698046 containerd[1808]: time="2025-05-17T00:29:13.698034396Z" level=info msg="Start snapshots syncer" May 17 00:29:13.698046 containerd[1808]: time="2025-05-17T00:29:13.698039831Z" level=info msg="Start cni network conf syncer for default" May 17 00:29:13.698086 containerd[1808]: time="2025-05-17T00:29:13.698047208Z" level=info msg="Start streaming server" May 17 00:29:13.698086 containerd[1808]: time="2025-05-17T00:29:13.698052565Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:29:13.698086 containerd[1808]: time="2025-05-17T00:29:13.698082029Z" level=info msg="containerd successfully booted in 0.032646s" May 17 00:29:13.698172 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:29:13.699232 sshd_keygen[1802]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:29:13.711561 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:29:13.741454 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:29:13.749604 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:29:13.749695 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:29:13.760680 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:29:13.765059 tar[1806]: linux-amd64/LICENSE May 17 00:29:13.765129 tar[1806]: linux-amd64/README.md May 17 00:29:13.780248 kernel: EXT4-fs (sda9): resized filesystem to 116605649 May 17 00:29:13.788720 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:29:13.801383 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:29:13.804894 extend-filesystems[1788]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 00:29:13.804894 extend-filesystems[1788]: old_desc_blocks = 1, new_desc_blocks = 56 May 17 00:29:13.804894 extend-filesystems[1788]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. May 17 00:29:13.838362 extend-filesystems[1778]: Resized filesystem in /dev/sda9 May 17 00:29:13.838362 extend-filesystems[1778]: Found sdb May 17 00:29:13.810108 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. May 17 00:29:13.820539 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:29:13.857650 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:29:13.857735 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:29:13.882542 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:29:14.240378 systemd-networkd[1605]: bond0: Gained IPv6LL May 17 00:29:14.945665 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:29:14.957088 systemd[1]: Reached target network-online.target - Network is Online. 
May 17 00:29:14.986519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:29:14.996995 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:29:15.015849 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:29:15.735944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:29:15.747826 (kubelet)[1907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:29:16.176616 kubelet[1907]: E0517 00:29:16.176526 1907 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:29:16.177694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:29:16.177771 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:29:17.522018 systemd-resolved[1623]: Clock change detected. Flushing caches. May 17 00:29:17.522069 systemd-timesyncd[1767]: Contacted time server 50.251.160.20:123 (0.flatcar.pool.ntp.org). May 17 00:29:17.522112 systemd-timesyncd[1767]: Initial clock synchronization to Sat 2025-05-17 00:29:17.521958 UTC. May 17 00:29:17.805425 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:29:17.835656 systemd[1]: Started sshd@0-147.75.203.231:22-147.75.109.163:50550.service - OpenSSH per-connection server daemon (147.75.109.163:50550). May 17 00:29:17.876395 sshd[1925]: Accepted publickey for core from 147.75.109.163 port 50550 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:29:17.877638 sshd[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:29:17.883067 systemd-logind[1798]: New session 1 of user core. May 17 00:29:17.884108 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:29:17.914675 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:29:17.927596 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:29:17.955618 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:29:17.966414 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 May 17 00:29:17.966530 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity May 17 00:29:18.002835 (systemd)[1929]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:29:18.078995 systemd[1929]: Queued start job for default target default.target. May 17 00:29:18.090935 systemd[1929]: Created slice app.slice - User Application Slice. May 17 00:29:18.090949 systemd[1929]: Reached target paths.target - Paths. May 17 00:29:18.090958 systemd[1929]: Reached target timers.target - Timers. May 17 00:29:18.091610 systemd[1929]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:29:18.097165 systemd[1929]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:29:18.097194 systemd[1929]: Reached target sockets.target - Sockets. May 17 00:29:18.097204 systemd[1929]: Reached target basic.target - Basic System. May 17 00:29:18.097225 systemd[1929]: Reached target default.target - Main User Target. May 17 00:29:18.097240 systemd[1929]: Startup finished in 91ms. 
May 17 00:29:18.097344 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:29:18.107413 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:29:18.171808 systemd[1]: Started sshd@1-147.75.203.231:22-147.75.109.163:59624.service - OpenSSH per-connection server daemon (147.75.109.163:59624). May 17 00:29:18.208945 sshd[1942]: Accepted publickey for core from 147.75.109.163 port 59624 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:29:18.209577 sshd[1942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:29:18.211970 systemd-logind[1798]: New session 2 of user core. May 17 00:29:18.230544 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:29:18.288442 sshd[1942]: pam_unix(sshd:session): session closed for user core May 17 00:29:18.300048 systemd[1]: sshd@1-147.75.203.231:22-147.75.109.163:59624.service: Deactivated successfully. May 17 00:29:18.300727 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:29:18.301375 systemd-logind[1798]: Session 2 logged out. Waiting for processes to exit. May 17 00:29:18.301991 systemd[1]: Started sshd@2-147.75.203.231:22-147.75.109.163:59638.service - OpenSSH per-connection server daemon (147.75.109.163:59638). May 17 00:29:18.313145 systemd-logind[1798]: Removed session 2. May 17 00:29:18.328224 coreos-metadata[1849]: May 17 00:29:18.328 INFO Fetch successful May 17 00:29:18.338808 sshd[1949]: Accepted publickey for core from 147.75.109.163 port 59638 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:29:18.339488 sshd[1949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:29:18.342083 systemd-logind[1798]: New session 3 of user core. May 17 00:29:18.351514 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:29:18.361718 unknown[1849]: wrote ssh authorized keys file for user: core May 17 00:29:18.393733 update-ssh-keys[1951]: Updated "/home/core/.ssh/authorized_keys" May 17 00:29:18.394058 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:29:18.406136 systemd[1]: Finished sshkeys.service. May 17 00:29:18.414300 sshd[1949]: pam_unix(sshd:session): session closed for user core May 17 00:29:18.415799 systemd[1]: sshd@2-147.75.203.231:22-147.75.109.163:59638.service: Deactivated successfully. May 17 00:29:18.416521 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:29:18.417174 systemd-logind[1798]: Session 3 logged out. Waiting for processes to exit. May 17 00:29:18.417791 systemd-logind[1798]: Removed session 3. May 17 00:29:18.974843 coreos-metadata[1772]: May 17 00:29:18.974 INFO Fetch successful May 17 00:29:19.019698 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:29:19.030595 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... May 17 00:29:19.467646 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. May 17 00:29:19.481966 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:29:19.492193 systemd[1]: Startup finished in 2.669s (kernel) + 24.269s (initrd) + 11.081s (userspace) = 38.021s. May 17 00:29:19.512122 login[1888]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:29:19.515581 systemd-logind[1798]: New session 4 of user core. May 17 00:29:19.542582 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 17 00:29:19.550333 login[1886]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:29:19.552911 systemd-logind[1798]: New session 5 of user core. May 17 00:29:19.553683 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:29:27.471290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:29:27.483641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:29:27.734969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:29:27.737083 (kubelet)[2000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:29:27.758565 kubelet[2000]: E0517 00:29:27.758511 2000 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:29:27.760793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:29:27.760885 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:29:28.441753 systemd[1]: Started sshd@3-147.75.203.231:22-147.75.109.163:44852.service - OpenSSH per-connection server daemon (147.75.109.163:44852). May 17 00:29:28.469336 sshd[2020]: Accepted publickey for core from 147.75.109.163 port 44852 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:29:28.470063 sshd[2020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:29:28.472751 systemd-logind[1798]: New session 6 of user core. May 17 00:29:28.482619 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:29:28.533419 sshd[2020]: pam_unix(sshd:session): session closed for user core May 17 00:29:28.549149 systemd[1]: sshd@3-147.75.203.231:22-147.75.109.163:44852.service: Deactivated successfully. May 17 00:29:28.549978 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:29:28.550788 systemd-logind[1798]: Session 6 logged out. Waiting for processes to exit. May 17 00:29:28.551460 systemd[1]: Started sshd@4-147.75.203.231:22-147.75.109.163:44854.service - OpenSSH per-connection server daemon (147.75.109.163:44854). May 17 00:29:28.552082 systemd-logind[1798]: Removed session 6. May 17 00:29:28.582926 sshd[2027]: Accepted publickey for core from 147.75.109.163 port 44854 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:29:28.583617 sshd[2027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:29:28.586254 systemd-logind[1798]: New session 7 of user core. May 17 00:29:28.599627 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:29:28.647390 sshd[2027]: pam_unix(sshd:session): session closed for user core May 17 00:29:28.670730 systemd[1]: sshd@4-147.75.203.231:22-147.75.109.163:44854.service: Deactivated successfully. May 17 00:29:28.674278 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:29:28.677572 systemd-logind[1798]: Session 7 logged out. Waiting for processes to exit. May 17 00:29:28.697157 systemd[1]: Started sshd@5-147.75.203.231:22-147.75.109.163:44866.service - OpenSSH per-connection server daemon (147.75.109.163:44866). May 17 00:29:28.699634 systemd-logind[1798]: Removed session 7. 
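[Annotation] "Scheduled restart job, restart counter is at 1" is systemd's Restart= logic re-launching the still-unconfigured kubelet on a fixed interval (roughly ten seconds between attempts in this log, once the clock step is accounted for). A sketch for inspecting and overriding that policy — the override values shown are assumptions, not this unit's actual settings:

    # Inspect the active restart policy, then override it with a drop-in.
    systemctl show kubelet -p Restart -p RestartUSec
    sudo systemctl edit kubelet    # opens an override file; e.g. add:
    #   [Service]
    #   Restart=always
    #   RestartSec=10
    sudo systemctl daemon-reload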
May 17 00:29:28.747710 sshd[2034]: Accepted publickey for core from 147.75.109.163 port 44866 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:29:28.748329 sshd[2034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:29:28.750957 systemd-logind[1798]: New session 8 of user core. May 17 00:29:28.768929 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:29:28.829661 sshd[2034]: pam_unix(sshd:session): session closed for user core May 17 00:29:28.853617 systemd[1]: sshd@5-147.75.203.231:22-147.75.109.163:44866.service: Deactivated successfully. May 17 00:29:28.854330 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:29:28.855040 systemd-logind[1798]: Session 8 logged out. Waiting for processes to exit. May 17 00:29:28.855695 systemd[1]: Started sshd@6-147.75.203.231:22-147.75.109.163:44870.service - OpenSSH per-connection server daemon (147.75.109.163:44870). May 17 00:29:28.856137 systemd-logind[1798]: Removed session 8. May 17 00:29:28.886456 sshd[2042]: Accepted publickey for core from 147.75.109.163 port 44870 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:29:28.887234 sshd[2042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:29:28.890130 systemd-logind[1798]: New session 9 of user core. May 17 00:29:28.911666 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:29:28.976830 sudo[2045]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:29:28.976980 sudo[2045]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:29:28.995114 sudo[2045]: pam_unix(sudo:session): session closed for user root May 17 00:29:28.996162 sshd[2042]: pam_unix(sshd:session): session closed for user core May 17 00:29:29.016808 systemd[1]: sshd@6-147.75.203.231:22-147.75.109.163:44870.service: Deactivated successfully. May 17 00:29:29.018026 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:29:29.019157 systemd-logind[1798]: Session 9 logged out. Waiting for processes to exit. May 17 00:29:29.020341 systemd[1]: Started sshd@7-147.75.203.231:22-147.75.109.163:44874.service - OpenSSH per-connection server daemon (147.75.109.163:44874). May 17 00:29:29.021311 systemd-logind[1798]: Removed session 9. May 17 00:29:29.099256 sshd[2050]: Accepted publickey for core from 147.75.109.163 port 44874 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:29:29.101229 sshd[2050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:29:29.107555 systemd-logind[1798]: New session 10 of user core. May 17 00:29:29.127918 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:29:29.186055 sudo[2054]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:29:29.186201 sudo[2054]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:29:29.188181 sudo[2054]: pam_unix(sudo:session): session closed for user root May 17 00:29:29.190712 sudo[2053]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:29:29.190856 sudo[2053]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:29:29.205690 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
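[Annotation] The sudo entries above show the install flow switching SELinux to enforcing (setenforce 1) and pruning the default audit rules before reloading audit-rules.service. To verify the result by hand, assuming the usual SELinux/audit userspace tools are present on this image:

    getenforce                 # expect "Enforcing" after the setenforce 1 above
    sudo auditctl -l           # "No rules" matches the log once the defaults are removed
    sudo systemctl restart audit-rules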
May 17 00:29:29.206755 auditctl[2057]: No rules May 17 00:29:29.206976 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:29:29.207096 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:29:29.208553 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:29:29.224291 augenrules[2075]: No rules May 17 00:29:29.224692 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:29:29.225242 sudo[2053]: pam_unix(sudo:session): session closed for user root May 17 00:29:29.226107 sshd[2050]: pam_unix(sshd:session): session closed for user core May 17 00:29:29.228060 systemd[1]: sshd@7-147.75.203.231:22-147.75.109.163:44874.service: Deactivated successfully. May 17 00:29:29.228864 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:29:29.229265 systemd-logind[1798]: Session 10 logged out. Waiting for processes to exit. May 17 00:29:29.230231 systemd[1]: Started sshd@8-147.75.203.231:22-147.75.109.163:44884.service - OpenSSH per-connection server daemon (147.75.109.163:44884). May 17 00:29:29.230838 systemd-logind[1798]: Removed session 10. May 17 00:29:29.278717 sshd[2083]: Accepted publickey for core from 147.75.109.163 port 44884 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:29:29.279661 sshd[2083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:29:29.283123 systemd-logind[1798]: New session 11 of user core. May 17 00:29:29.293621 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:29:29.356606 sudo[2086]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:29:29.357504 sudo[2086]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:29:29.707755 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:29:29.707808 (dockerd)[2110]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:29:29.956320 dockerd[2110]: time="2025-05-17T00:29:29.956290874Z" level=info msg="Starting up" May 17 00:29:30.038408 dockerd[2110]: time="2025-05-17T00:29:30.038382229Z" level=info msg="Loading containers: start." May 17 00:29:30.117375 kernel: Initializing XFRM netlink socket May 17 00:29:30.174701 systemd-networkd[1605]: docker0: Link UP May 17 00:29:30.193311 dockerd[2110]: time="2025-05-17T00:29:30.193261596Z" level=info msg="Loading containers: done." May 17 00:29:30.203354 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1597210948-merged.mount: Deactivated successfully. May 17 00:29:30.203506 dockerd[2110]: time="2025-05-17T00:29:30.203481538Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:29:30.203539 dockerd[2110]: time="2025-05-17T00:29:30.203530083Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:29:30.203590 dockerd[2110]: time="2025-05-17T00:29:30.203580825Z" level=info msg="Daemon has completed initialization" May 17 00:29:30.218710 dockerd[2110]: time="2025-05-17T00:29:30.218635172Z" level=info msg="API listen on /run/docker.sock" May 17 00:29:30.218719 systemd[1]: Started docker.service - Docker Application Container Engine. 
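[Annotation] dockerd came up on overlay2 and warned that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled — a performance note for image builds, not an error. Confirming the effective storage driver is one docker info away; pinning it explicitly would be a daemon.json change (path and key are the standard ones, shown here as a sketch):

    docker info --format '{{.Driver}} {{.ServerVersion}}'
    # Optional explicit pin in /etc/docker/daemon.json:
    #   { "storage-driver": "overlay2" }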
May 17 00:29:30.985977 containerd[1808]: time="2025-05-17T00:29:30.985861320Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:29:31.738983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432652154.mount: Deactivated successfully. May 17 00:29:32.440885 containerd[1808]: time="2025-05-17T00:29:32.440830711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:32.441103 containerd[1808]: time="2025-05-17T00:29:32.440911713Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 17 00:29:32.441458 containerd[1808]: time="2025-05-17T00:29:32.441399334Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:32.443210 containerd[1808]: time="2025-05-17T00:29:32.443194469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:32.443710 containerd[1808]: time="2025-05-17T00:29:32.443697976Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 1.457766468s" May 17 00:29:32.443743 containerd[1808]: time="2025-05-17T00:29:32.443714681Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:29:32.444266 containerd[1808]: time="2025-05-17T00:29:32.444126322Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:29:33.517613 containerd[1808]: time="2025-05-17T00:29:33.517586378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:33.517829 containerd[1808]: time="2025-05-17T00:29:33.517815990Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522" May 17 00:29:33.518157 containerd[1808]: time="2025-05-17T00:29:33.518146266Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:33.520014 containerd[1808]: time="2025-05-17T00:29:33.519973385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:33.520511 containerd[1808]: time="2025-05-17T00:29:33.520468815Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.076197722s" May 17 
00:29:33.520511 containerd[1808]: time="2025-05-17T00:29:33.520487091Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:29:33.520782 containerd[1808]: time="2025-05-17T00:29:33.520753165Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:29:34.377801 containerd[1808]: time="2025-05-17T00:29:34.377774242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:34.378033 containerd[1808]: time="2025-05-17T00:29:34.378009975Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311" May 17 00:29:34.378449 containerd[1808]: time="2025-05-17T00:29:34.378415628Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:34.380014 containerd[1808]: time="2025-05-17T00:29:34.379977109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:34.380644 containerd[1808]: time="2025-05-17T00:29:34.380628398Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 859.85909ms" May 17 00:29:34.380687 containerd[1808]: time="2025-05-17T00:29:34.380646276Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:29:34.380930 containerd[1808]: time="2025-05-17T00:29:34.380919027Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:29:35.215113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1945000170.mount: Deactivated successfully. 
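[Annotation] The PullImage round-trips above (1.46s for the apiserver image, 1.08s for the controller-manager, 860ms for the scheduler) are containerd pulls driven through the CRI. The same pulls can be issued manually with crictl against the same containerd socket — the endpoint below is the usual default, assumed here:

    # Pull and list through the CRI, mirroring what the log shows containerd doing.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-scheduler:v1.31.9
    sudo crictl images | grep kube-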
May 17 00:29:35.399310 containerd[1808]: time="2025-05-17T00:29:35.399284766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:35.399532 containerd[1808]: time="2025-05-17T00:29:35.399515003Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 17 00:29:35.399858 containerd[1808]: time="2025-05-17T00:29:35.399817400Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:35.400793 containerd[1808]: time="2025-05-17T00:29:35.400751338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:35.401160 containerd[1808]: time="2025-05-17T00:29:35.401119973Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.020185301s" May 17 00:29:35.401160 containerd[1808]: time="2025-05-17T00:29:35.401135324Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:29:35.401488 containerd[1808]: time="2025-05-17T00:29:35.401475785Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:29:35.848721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2709474677.mount: Deactivated successfully. 
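[Annotation] kube-proxy and (next) CoreDNS round out the standard image set for this release line. Assuming kubeadm is on the node, the full list for a given version can be printed or pre-pulled ahead of time, which is useful on slow or air-gapped links:

    kubeadm config images list --kubernetes-version v1.31.9
    kubeadm config images pull --kubernetes-version v1.31.9   # optional pre-pull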
May 17 00:29:36.424393 containerd[1808]: time="2025-05-17T00:29:36.424338072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:36.424705 containerd[1808]: time="2025-05-17T00:29:36.424657474Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 17 00:29:36.424983 containerd[1808]: time="2025-05-17T00:29:36.424944513Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:36.426585 containerd[1808]: time="2025-05-17T00:29:36.426545565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:36.427221 containerd[1808]: time="2025-05-17T00:29:36.427179389Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.025688122s" May 17 00:29:36.427221 containerd[1808]: time="2025-05-17T00:29:36.427195290Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:29:36.427497 containerd[1808]: time="2025-05-17T00:29:36.427458410Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:29:36.924526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1972769021.mount: Deactivated successfully. 
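[Annotation] The pause:3.10 image pulled next is the sandbox (infra) container that anchors every pod's namespaces. Which pause tag containerd uses is a CRI plugin setting; a sketch of the containerd 1.7-era key, assuming the default config path:

    # /etc/containerd/config.toml excerpt (containerd 1.7 CRI plugin):
    #   [plugins."io.containerd.grpc.v1.cri"]
    #     sandbox_image = "registry.k8s.io/pause:3.10"
    sudo containerd config dump | grep sandbox_image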
May 17 00:29:36.925762 containerd[1808]: time="2025-05-17T00:29:36.925709746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:36.925899 containerd[1808]: time="2025-05-17T00:29:36.925856315Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:29:36.926305 containerd[1808]: time="2025-05-17T00:29:36.926271419Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:36.928145 containerd[1808]: time="2025-05-17T00:29:36.928098765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:36.928687 containerd[1808]: time="2025-05-17T00:29:36.928641164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 501.168045ms" May 17 00:29:36.928687 containerd[1808]: time="2025-05-17T00:29:36.928656672Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:29:36.929026 containerd[1808]: time="2025-05-17T00:29:36.929015110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:29:37.432436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1532792508.mount: Deactivated successfully. May 17 00:29:37.969989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:29:37.983855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:29:38.284367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:29:38.286662 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:29:38.305895 kubelet[2465]: E0517 00:29:38.305871 2465 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:29:38.306941 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:29:38.307037 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
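[Annotation] This is the same config.yaml failure a third time; the crash loop only ends once kubeadm writes the kubelet config. On a normal bootstrap that happens inside kubeadm init, but the individual phase can also be run on its own (sketch; assumes kubeadm is present and the node is being initialized):

    # Writes /var/lib/kubelet/config.yaml plus kubelet flags, then (re)starts kubelet.
    sudo kubeadm init phase kubelet-start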
May 17 00:29:38.546697 containerd[1808]: time="2025-05-17T00:29:38.546607653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:38.546886 containerd[1808]: time="2025-05-17T00:29:38.546783101Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 17 00:29:38.547278 containerd[1808]: time="2025-05-17T00:29:38.547239004Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:38.549280 containerd[1808]: time="2025-05-17T00:29:38.549237402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:29:38.549846 containerd[1808]: time="2025-05-17T00:29:38.549805017Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.620776189s" May 17 00:29:38.549846 containerd[1808]: time="2025-05-17T00:29:38.549821709Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:29:39.920536 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:29:39.937779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:29:39.950123 systemd[1]: Reloading requested from client PID 2539 ('systemctl') (unit session-11.scope)... May 17 00:29:39.950131 systemd[1]: Reloading... May 17 00:29:39.988405 zram_generator::config[2578]: No configuration found. May 17 00:29:40.054119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:29:40.114536 systemd[1]: Reloading finished in 164 ms. May 17 00:29:40.159160 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:29:40.159202 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:29:40.159309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:29:40.168772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:29:40.421169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:29:40.423438 (kubelet)[2642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:29:40.447421 kubelet[2642]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:29:40.447421 kubelet[2642]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 17 00:29:40.447421 kubelet[2642]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:29:40.447698 kubelet[2642]: I0517 00:29:40.447453 2642 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:29:40.575885 kubelet[2642]: I0517 00:29:40.575873 2642 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:29:40.575885 kubelet[2642]: I0517 00:29:40.575884 2642 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:29:40.576033 kubelet[2642]: I0517 00:29:40.575998 2642 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:29:40.591583 kubelet[2642]: E0517 00:29:40.591542 2642 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.203.231:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.203.231:6443: connect: connection refused" logger="UnhandledError" May 17 00:29:40.592485 kubelet[2642]: I0517 00:29:40.592448 2642 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:29:40.596261 kubelet[2642]: E0517 00:29:40.596217 2642 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:29:40.596261 kubelet[2642]: I0517 00:29:40.596230 2642 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:29:40.606839 kubelet[2642]: I0517 00:29:40.606801 2642 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:29:40.607349 kubelet[2642]: I0517 00:29:40.607313 2642 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:29:40.607432 kubelet[2642]: I0517 00:29:40.607382 2642 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:29:40.607549 kubelet[2642]: I0517 00:29:40.607432 2642 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-65a4af4639","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:29:40.607549 kubelet[2642]: I0517 00:29:40.607529 2642 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:29:40.607549 kubelet[2642]: I0517 00:29:40.607535 2642 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:29:40.607646 kubelet[2642]: I0517 00:29:40.607590 2642 state_mem.go:36] "Initialized new in-memory state store" May 17 00:29:40.609656 kubelet[2642]: I0517 00:29:40.609620 2642 kubelet.go:408] "Attempting to sync node with API server" May 17 00:29:40.609656 kubelet[2642]: I0517 00:29:40.609630 2642 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:29:40.609656 kubelet[2642]: I0517 00:29:40.609647 2642 kubelet.go:314] "Adding apiserver pod source" May 17 00:29:40.609656 kubelet[2642]: I0517 00:29:40.609657 2642 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:29:40.611867 kubelet[2642]: I0517 00:29:40.611840 2642 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:29:40.612129 kubelet[2642]: I0517 00:29:40.612121 2642 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:29:40.612165 kubelet[2642]: W0517 00:29:40.612154 2642 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:29:40.612775 kubelet[2642]: W0517 00:29:40.612733 2642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.203.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-65a4af4639&limit=500&resourceVersion=0": dial tcp 147.75.203.231:6443: connect: connection refused May 17 00:29:40.612808 kubelet[2642]: E0517 00:29:40.612783 2642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.203.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-65a4af4639&limit=500&resourceVersion=0\": dial tcp 147.75.203.231:6443: connect: connection refused" logger="UnhandledError" May 17 00:29:40.612919 kubelet[2642]: W0517 00:29:40.612869 2642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.203.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.203.231:6443: connect: connection refused May 17 00:29:40.612919 kubelet[2642]: E0517 00:29:40.612906 2642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.203.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.203.231:6443: connect: connection refused" logger="UnhandledError" May 17 00:29:40.613629 kubelet[2642]: I0517 00:29:40.613621 2642 server.go:1274] "Started kubelet" May 17 00:29:40.613734 kubelet[2642]: I0517 00:29:40.613703 2642 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:29:40.613765 kubelet[2642]: I0517 00:29:40.613741 2642 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:29:40.613955 kubelet[2642]: I0517 00:29:40.613945 2642 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:29:40.614493 kubelet[2642]: I0517 00:29:40.614485 2642 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:29:40.614534 kubelet[2642]: I0517 00:29:40.614493 2642 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:29:40.614534 kubelet[2642]: I0517 00:29:40.614521 2642 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:29:40.614637 kubelet[2642]: E0517 00:29:40.614535 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found" May 17 00:29:40.614637 kubelet[2642]: I0517 00:29:40.614617 2642 reconciler.go:26] "Reconciler: start to sync state" May 17 00:29:40.614922 kubelet[2642]: I0517 00:29:40.614898 2642 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:29:40.619594 kubelet[2642]: I0517 00:29:40.619575 2642 server.go:449] "Adding debug handlers to kubelet server" May 17 00:29:40.619755 kubelet[2642]: E0517 00:29:40.619728 2642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-65a4af4639?timeout=10s\": dial tcp 147.75.203.231:6443: connect: connection refused" interval="200ms" May 17 00:29:40.619815 kubelet[2642]: E0517 00:29:40.619800 2642 kubelet.go:1478] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:29:40.619870 kubelet[2642]: I0517 00:29:40.619864 2642 factory.go:221] Registration of the systemd container factory successfully May 17 00:29:40.620009 kubelet[2642]: I0517 00:29:40.619974 2642 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:29:40.620054 kubelet[2642]: W0517 00:29:40.619983 2642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.203.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.231:6443: connect: connection refused May 17 00:29:40.620054 kubelet[2642]: E0517 00:29:40.620032 2642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.203.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.203.231:6443: connect: connection refused" logger="UnhandledError" May 17 00:29:40.621044 kubelet[2642]: I0517 00:29:40.621032 2642 factory.go:221] Registration of the containerd container factory successfully May 17 00:29:40.622037 kubelet[2642]: E0517 00:29:40.621060 2642 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.203.231:6443/api/v1/namespaces/default/events\": dial tcp 147.75.203.231:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-n-65a4af4639.1840290a87266e2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-65a4af4639,UID:ci-4081.3.3-n-65a4af4639,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-65a4af4639,},FirstTimestamp:2025-05-17 00:29:40.613606955 +0000 UTC m=+0.188293258,LastTimestamp:2025-05-17 00:29:40.613606955 +0000 UTC m=+0.188293258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-65a4af4639,}" May 17 00:29:40.627915 kubelet[2642]: I0517 00:29:40.627853 2642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:29:40.628528 kubelet[2642]: I0517 00:29:40.628490 2642 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:29:40.628528 kubelet[2642]: I0517 00:29:40.628503 2642 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:29:40.628528 kubelet[2642]: I0517 00:29:40.628514 2642 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:29:40.628592 kubelet[2642]: E0517 00:29:40.628535 2642 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:29:40.628864 kubelet[2642]: W0517 00:29:40.628819 2642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.203.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.231:6443: connect: connection refused May 17 00:29:40.628933 kubelet[2642]: E0517 00:29:40.628867 2642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.203.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.203.231:6443: connect: connection refused" logger="UnhandledError" May 17 00:29:40.654320 kubelet[2642]: I0517 00:29:40.654308 2642 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:29:40.654320 kubelet[2642]: I0517 00:29:40.654317 2642 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:29:40.654404 kubelet[2642]: I0517 00:29:40.654328 2642 state_mem.go:36] "Initialized new in-memory state store" May 17 00:29:40.661151 kubelet[2642]: I0517 00:29:40.661110 2642 policy_none.go:49] "None policy: Start" May 17 00:29:40.661572 kubelet[2642]: I0517 00:29:40.661528 2642 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:29:40.661572 kubelet[2642]: I0517 00:29:40.661553 2642 state_mem.go:35] "Initializing new in-memory state store" May 17 00:29:40.666630 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:29:40.674094 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:29:40.675986 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:29:40.690042 kubelet[2642]: I0517 00:29:40.690029 2642 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:29:40.690160 kubelet[2642]: I0517 00:29:40.690151 2642 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:29:40.690194 kubelet[2642]: I0517 00:29:40.690160 2642 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:29:40.690292 kubelet[2642]: I0517 00:29:40.690283 2642 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:29:40.690987 kubelet[2642]: E0517 00:29:40.690974 2642 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-n-65a4af4639\" not found" May 17 00:29:40.752161 systemd[1]: Created slice kubepods-burstable-pod77ffc6f051d14e85fefe5f258f59f00f.slice - libcontainer container kubepods-burstable-pod77ffc6f051d14e85fefe5f258f59f00f.slice. 
May 17 00:29:40.794490 kubelet[2642]: I0517 00:29:40.794429 2642 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-65a4af4639" May 17 00:29:40.795229 kubelet[2642]: E0517 00:29:40.795151 2642 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.203.231:6443/api/v1/nodes\": dial tcp 147.75.203.231:6443: connect: connection refused" node="ci-4081.3.3-n-65a4af4639" May 17 00:29:40.796829 systemd[1]: Created slice kubepods-burstable-poda143b24f9c1821ab0bbca96848324a62.slice - libcontainer container kubepods-burstable-poda143b24f9c1821ab0bbca96848324a62.slice. May 17 00:29:40.821153 kubelet[2642]: E0517 00:29:40.821074 2642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-65a4af4639?timeout=10s\": dial tcp 147.75.203.231:6443: connect: connection refused" interval="400ms" May 17 00:29:40.824189 systemd[1]: Created slice kubepods-burstable-podf9c40c91615c4b7a49f8ee1746e4c378.slice - libcontainer container kubepods-burstable-podf9c40c91615c4b7a49f8ee1746e4c378.slice. May 17 00:29:40.916277 kubelet[2642]: I0517 00:29:40.916189 2642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77ffc6f051d14e85fefe5f258f59f00f-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-65a4af4639\" (UID: \"77ffc6f051d14e85fefe5f258f59f00f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-65a4af4639" May 17 00:29:40.916277 kubelet[2642]: I0517 00:29:40.916286 2642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a143b24f9c1821ab0bbca96848324a62-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-65a4af4639\" (UID: \"a143b24f9c1821ab0bbca96848324a62\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-65a4af4639" May 17 00:29:40.916700 kubelet[2642]: I0517 00:29:40.916357 2642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a143b24f9c1821ab0bbca96848324a62-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-65a4af4639\" (UID: \"a143b24f9c1821ab0bbca96848324a62\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-65a4af4639" May 17 00:29:40.916700 kubelet[2642]: I0517 00:29:40.916463 2642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9c40c91615c4b7a49f8ee1746e4c378-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-65a4af4639\" (UID: \"f9c40c91615c4b7a49f8ee1746e4c378\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-65a4af4639" May 17 00:29:40.916700 kubelet[2642]: I0517 00:29:40.916514 2642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77ffc6f051d14e85fefe5f258f59f00f-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-65a4af4639\" (UID: \"77ffc6f051d14e85fefe5f258f59f00f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-65a4af4639" May 17 00:29:40.916700 kubelet[2642]: I0517 00:29:40.916561 2642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/77ffc6f051d14e85fefe5f258f59f00f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-65a4af4639\" (UID: \"77ffc6f051d14e85fefe5f258f59f00f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-65a4af4639" May 17 00:29:40.916700 kubelet[2642]: I0517 00:29:40.916608 2642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a143b24f9c1821ab0bbca96848324a62-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-65a4af4639\" (UID: \"a143b24f9c1821ab0bbca96848324a62\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-65a4af4639" May 17 00:29:40.917161 kubelet[2642]: I0517 00:29:40.916657 2642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a143b24f9c1821ab0bbca96848324a62-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-65a4af4639\" (UID: \"a143b24f9c1821ab0bbca96848324a62\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-65a4af4639" May 17 00:29:40.917161 kubelet[2642]: I0517 00:29:40.916759 2642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a143b24f9c1821ab0bbca96848324a62-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-65a4af4639\" (UID: \"a143b24f9c1821ab0bbca96848324a62\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-65a4af4639" May 17 00:29:41.000121 kubelet[2642]: I0517 00:29:41.000019 2642 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-65a4af4639" May 17 00:29:41.000852 kubelet[2642]: E0517 00:29:41.000741 2642 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.203.231:6443/api/v1/nodes\": dial tcp 147.75.203.231:6443: connect: connection refused" node="ci-4081.3.3-n-65a4af4639" May 17 00:29:41.089809 containerd[1808]: time="2025-05-17T00:29:41.089682630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-65a4af4639,Uid:77ffc6f051d14e85fefe5f258f59f00f,Namespace:kube-system,Attempt:0,}" May 17 00:29:41.102376 containerd[1808]: time="2025-05-17T00:29:41.102332104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-65a4af4639,Uid:a143b24f9c1821ab0bbca96848324a62,Namespace:kube-system,Attempt:0,}" May 17 00:29:41.130228 containerd[1808]: time="2025-05-17T00:29:41.130182004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-65a4af4639,Uid:f9c40c91615c4b7a49f8ee1746e4c378,Namespace:kube-system,Attempt:0,}" May 17 00:29:41.222149 kubelet[2642]: E0517 00:29:41.222089 2642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-65a4af4639?timeout=10s\": dial tcp 147.75.203.231:6443: connect: connection refused" interval="800ms" May 17 00:29:41.402486 kubelet[2642]: I0517 00:29:41.402379 2642 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-65a4af4639" May 17 00:29:41.402679 kubelet[2642]: E0517 00:29:41.402663 2642 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.203.231:6443/api/v1/nodes\": dial tcp 147.75.203.231:6443: connect: connection refused" node="ci-4081.3.3-n-65a4af4639" May 17 00:29:41.589642 kubelet[2642]: 
W0517 00:29:41.589566 2642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.203.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.203.231:6443: connect: connection refused May 17 00:29:41.589642 kubelet[2642]: E0517 00:29:41.589617 2642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.203.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.203.231:6443: connect: connection refused" logger="UnhandledError" May 17 00:29:41.598480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount487267373.mount: Deactivated successfully. May 17 00:29:41.600424 containerd[1808]: time="2025-05-17T00:29:41.600375590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:29:41.601040 containerd[1808]: time="2025-05-17T00:29:41.601005951Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:29:41.601225 containerd[1808]: time="2025-05-17T00:29:41.601204974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:29:41.601628 containerd[1808]: time="2025-05-17T00:29:41.601582747Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:29:41.601908 containerd[1808]: time="2025-05-17T00:29:41.601842573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:29:41.602247 containerd[1808]: time="2025-05-17T00:29:41.602211937Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:29:41.602436 containerd[1808]: time="2025-05-17T00:29:41.602371547Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:29:41.603553 containerd[1808]: time="2025-05-17T00:29:41.603515952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:29:41.605172 containerd[1808]: time="2025-05-17T00:29:41.605131728Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.757567ms" May 17 00:29:41.605945 containerd[1808]: time="2025-05-17T00:29:41.605903217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 475.681823ms" May 17 00:29:41.607067 containerd[1808]: time="2025-05-17T00:29:41.607011030Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 517.183536ms" May 17 00:29:41.636566 kubelet[2642]: W0517 00:29:41.636503 2642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.203.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.231:6443: connect: connection refused May 17 00:29:41.636566 kubelet[2642]: E0517 00:29:41.636544 2642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.203.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.203.231:6443: connect: connection refused" logger="UnhandledError" May 17 00:29:41.726590 containerd[1808]: time="2025-05-17T00:29:41.726537381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:29:41.726590 containerd[1808]: time="2025-05-17T00:29:41.726567386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:29:41.726590 containerd[1808]: time="2025-05-17T00:29:41.726574600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:41.726748 containerd[1808]: time="2025-05-17T00:29:41.726604165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:29:41.726748 containerd[1808]: time="2025-05-17T00:29:41.726621245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:41.726748 containerd[1808]: time="2025-05-17T00:29:41.726626437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:29:41.726748 containerd[1808]: time="2025-05-17T00:29:41.726633926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:41.726748 containerd[1808]: time="2025-05-17T00:29:41.726649350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:29:41.726748 containerd[1808]: time="2025-05-17T00:29:41.726672210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:41.726748 containerd[1808]: time="2025-05-17T00:29:41.726671952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:29:41.726748 containerd[1808]: time="2025-05-17T00:29:41.726679313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:41.726748 containerd[1808]: time="2025-05-17T00:29:41.726716618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:29:41.751706 systemd[1]: Started cri-containerd-31b99287494e817157c62a412942c0b4e5400f1f3df677b4735abe58fc48d694.scope - libcontainer container 31b99287494e817157c62a412942c0b4e5400f1f3df677b4735abe58fc48d694. May 17 00:29:41.752495 systemd[1]: Started cri-containerd-b6b9fb6517c075415b70ddfb17727d322e95681766e96a4b1ace17c613d67bcd.scope - libcontainer container b6b9fb6517c075415b70ddfb17727d322e95681766e96a4b1ace17c613d67bcd. May 17 00:29:41.753310 systemd[1]: Started cri-containerd-f758baed8a2eb0126ea72fbfc78a0942968dc21016fbd37de16818f9580ccffd.scope - libcontainer container f758baed8a2eb0126ea72fbfc78a0942968dc21016fbd37de16818f9580ccffd. May 17 00:29:41.775751 containerd[1808]: time="2025-05-17T00:29:41.775725990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-65a4af4639,Uid:77ffc6f051d14e85fefe5f258f59f00f,Namespace:kube-system,Attempt:0,} returns sandbox id \"31b99287494e817157c62a412942c0b4e5400f1f3df677b4735abe58fc48d694\"" May 17 00:29:41.777286 containerd[1808]: time="2025-05-17T00:29:41.777267119Z" level=info msg="CreateContainer within sandbox \"31b99287494e817157c62a412942c0b4e5400f1f3df677b4735abe58fc48d694\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:29:41.777360 containerd[1808]: time="2025-05-17T00:29:41.777343049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-65a4af4639,Uid:f9c40c91615c4b7a49f8ee1746e4c378,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6b9fb6517c075415b70ddfb17727d322e95681766e96a4b1ace17c613d67bcd\"" May 17 00:29:41.777743 containerd[1808]: time="2025-05-17T00:29:41.777730825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-65a4af4639,Uid:a143b24f9c1821ab0bbca96848324a62,Namespace:kube-system,Attempt:0,} returns sandbox id \"f758baed8a2eb0126ea72fbfc78a0942968dc21016fbd37de16818f9580ccffd\"" May 17 00:29:41.778382 containerd[1808]: time="2025-05-17T00:29:41.778367663Z" level=info msg="CreateContainer within sandbox \"b6b9fb6517c075415b70ddfb17727d322e95681766e96a4b1ace17c613d67bcd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:29:41.778680 containerd[1808]: time="2025-05-17T00:29:41.778670026Z" level=info msg="CreateContainer within sandbox \"f758baed8a2eb0126ea72fbfc78a0942968dc21016fbd37de16818f9580ccffd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:29:41.783964 containerd[1808]: time="2025-05-17T00:29:41.783949596Z" level=info msg="CreateContainer within sandbox \"31b99287494e817157c62a412942c0b4e5400f1f3df677b4735abe58fc48d694\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"afbd9a9bd2f044b75f63381374e373de5c43b1a23970db86ee11c98ea0e6a30a\"" May 17 00:29:41.784253 containerd[1808]: time="2025-05-17T00:29:41.784241464Z" level=info msg="StartContainer for \"afbd9a9bd2f044b75f63381374e373de5c43b1a23970db86ee11c98ea0e6a30a\"" May 17 00:29:41.785190 containerd[1808]: time="2025-05-17T00:29:41.785176067Z" level=info msg="CreateContainer within sandbox \"b6b9fb6517c075415b70ddfb17727d322e95681766e96a4b1ace17c613d67bcd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"b96f7cbcaab632a4526e59d1f0f59e8d5d5cd524b20147513900f2510c426c85\"" May 17 00:29:41.785319 containerd[1808]: time="2025-05-17T00:29:41.785307728Z" level=info msg="CreateContainer within sandbox \"f758baed8a2eb0126ea72fbfc78a0942968dc21016fbd37de16818f9580ccffd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2430b220f2ef468ff6a286b31332a702dd72df36fe5a4e44a9e6707cd3873b71\"" May 17 00:29:41.785353 containerd[1808]: time="2025-05-17T00:29:41.785334622Z" level=info msg="StartContainer for \"b96f7cbcaab632a4526e59d1f0f59e8d5d5cd524b20147513900f2510c426c85\"" May 17 00:29:41.785559 containerd[1808]: time="2025-05-17T00:29:41.785475678Z" level=info msg="StartContainer for \"2430b220f2ef468ff6a286b31332a702dd72df36fe5a4e44a9e6707cd3873b71\"" May 17 00:29:41.791196 kubelet[2642]: W0517 00:29:41.791159 2642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.203.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-65a4af4639&limit=500&resourceVersion=0": dial tcp 147.75.203.231:6443: connect: connection refused May 17 00:29:41.791278 kubelet[2642]: E0517 00:29:41.791203 2642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.203.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-65a4af4639&limit=500&resourceVersion=0\": dial tcp 147.75.203.231:6443: connect: connection refused" logger="UnhandledError" May 17 00:29:41.815699 systemd[1]: Started cri-containerd-afbd9a9bd2f044b75f63381374e373de5c43b1a23970db86ee11c98ea0e6a30a.scope - libcontainer container afbd9a9bd2f044b75f63381374e373de5c43b1a23970db86ee11c98ea0e6a30a. May 17 00:29:41.817791 systemd[1]: Started cri-containerd-2430b220f2ef468ff6a286b31332a702dd72df36fe5a4e44a9e6707cd3873b71.scope - libcontainer container 2430b220f2ef468ff6a286b31332a702dd72df36fe5a4e44a9e6707cd3873b71. May 17 00:29:41.818401 systemd[1]: Started cri-containerd-b96f7cbcaab632a4526e59d1f0f59e8d5d5cd524b20147513900f2510c426c85.scope - libcontainer container b96f7cbcaab632a4526e59d1f0f59e8d5d5cd524b20147513900f2510c426c85. 
May 17 00:29:41.841496 containerd[1808]: time="2025-05-17T00:29:41.841463963Z" level=info msg="StartContainer for \"afbd9a9bd2f044b75f63381374e373de5c43b1a23970db86ee11c98ea0e6a30a\" returns successfully"
May 17 00:29:41.860677 containerd[1808]: time="2025-05-17T00:29:41.860646819Z" level=info msg="StartContainer for \"b96f7cbcaab632a4526e59d1f0f59e8d5d5cd524b20147513900f2510c426c85\" returns successfully"
May 17 00:29:41.860766 containerd[1808]: time="2025-05-17T00:29:41.860664495Z" level=info msg="StartContainer for \"2430b220f2ef468ff6a286b31332a702dd72df36fe5a4e44a9e6707cd3873b71\" returns successfully"
May 17 00:29:42.204554 kubelet[2642]: I0517 00:29:42.204504 2642 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-65a4af4639"
May 17 00:29:42.314019 kubelet[2642]: E0517 00:29:42.313986 2642 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-n-65a4af4639\" not found" node="ci-4081.3.3-n-65a4af4639"
May 17 00:29:42.444984 kubelet[2642]: I0517 00:29:42.444935 2642 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-n-65a4af4639"
May 17 00:29:42.444984 kubelet[2642]: E0517 00:29:42.444960 2642 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.3-n-65a4af4639\": node \"ci-4081.3.3-n-65a4af4639\" not found"
May 17 00:29:42.450037 kubelet[2642]: E0517 00:29:42.450024 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found"
May 17 00:29:42.550569 kubelet[2642]: E0517 00:29:42.550530 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found"
May 17 00:29:42.651193 kubelet[2642]: E0517 00:29:42.651150 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found"
May 17 00:29:42.752028 kubelet[2642]: E0517 00:29:42.751908 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found"
May 17 00:29:42.852957 kubelet[2642]: E0517 00:29:42.852719 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found"
May 17 00:29:42.954022 kubelet[2642]: E0517 00:29:42.953918 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found"
May 17 00:29:43.054825 kubelet[2642]: E0517 00:29:43.054708 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found"
May 17 00:29:43.156036 kubelet[2642]: E0517 00:29:43.155797 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found"
May 17 00:29:43.256553 kubelet[2642]: E0517 00:29:43.256480 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found"
May 17 00:29:43.357510 kubelet[2642]: E0517 00:29:43.357438 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found"
May 17 00:29:43.458526 kubelet[2642]: E0517 00:29:43.458319 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found"
May 17 00:29:43.559247 kubelet[2642]: E0517 00:29:43.559152 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found"
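The sandbox/container lifecycle above is the standard CRI sequence the kubelet drives over containerd's gRPC socket: RunPodSandbox, then CreateContainer inside the returned sandbox, then StartContainer. A minimal sketch with the CRI client stubs (socket path assumed; the pod and container configs are elided, so this is an outline, not the kubelet's actual code):

    package crisketch

    import (
    	"context"

    	"google.golang.org/grpc"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // startPod mirrors the three log phases above: "RunPodSandbox ... returns
    // sandbox id", "CreateContainer within sandbox ... returns container id",
    // "StartContainer for ... returns successfully".
    func startPod(ctx context.Context, conn *grpc.ClientConn,
    	sandboxCfg *runtimeapi.PodSandboxConfig, ctrCfg *runtimeapi.ContainerConfig) error {
    	rc := runtimeapi.NewRuntimeServiceClient(conn)
    	sb, err := rc.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		return err
    	}
    	ctr, err := rc.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId:  sb.PodSandboxId,
    		Config:        ctrCfg,
    		SandboxConfig: sandboxCfg,
    	})
    	if err != nil {
    		return err
    	}
    	_, err = rc.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
    	return err
    }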
"Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found" May 17 00:29:43.659819 kubelet[2642]: E0517 00:29:43.659720 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found" May 17 00:29:43.761009 kubelet[2642]: E0517 00:29:43.760915 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found" May 17 00:29:43.861810 kubelet[2642]: E0517 00:29:43.861706 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found" May 17 00:29:43.962712 kubelet[2642]: E0517 00:29:43.962612 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found" May 17 00:29:44.063931 kubelet[2642]: E0517 00:29:44.063675 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found" May 17 00:29:44.165506 kubelet[2642]: E0517 00:29:44.165474 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found" May 17 00:29:44.266558 kubelet[2642]: E0517 00:29:44.266441 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found" May 17 00:29:44.367757 kubelet[2642]: E0517 00:29:44.367550 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found" May 17 00:29:44.468693 kubelet[2642]: E0517 00:29:44.468574 2642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found" May 17 00:29:44.611841 kubelet[2642]: I0517 00:29:44.611732 2642 apiserver.go:52] "Watching apiserver" May 17 00:29:44.615594 kubelet[2642]: I0517 00:29:44.615537 2642 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:29:44.881816 systemd[1]: Reloading requested from client PID 2966 ('systemctl') (unit session-11.scope)... May 17 00:29:44.881823 systemd[1]: Reloading... May 17 00:29:44.920380 zram_generator::config[3005]: No configuration found. May 17 00:29:44.986242 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:29:45.053737 systemd[1]: Reloading finished in 171 ms. May 17 00:29:45.075200 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:29:45.085851 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:29:45.085954 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:29:45.103795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:29:45.347943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:29:45.350184 (kubelet)[3069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:29:45.369002 kubelet[3069]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:29:45.369002 kubelet[3069]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 17 00:29:45.369002 kubelet[3069]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:29:45.369213 kubelet[3069]: I0517 00:29:45.369045 3069 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:29:45.372852 kubelet[3069]: I0517 00:29:45.372801 3069 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 17 00:29:45.372852 kubelet[3069]: I0517 00:29:45.372815 3069 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:29:45.372995 kubelet[3069]: I0517 00:29:45.372961 3069 server.go:934] "Client rotation is on, will bootstrap in background"
May 17 00:29:45.373741 kubelet[3069]: I0517 00:29:45.373705 3069 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 17 00:29:45.374778 kubelet[3069]: I0517 00:29:45.374770 3069 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:29:45.376309 kubelet[3069]: E0517 00:29:45.376292 3069 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:29:45.376362 kubelet[3069]: I0517 00:29:45.376309 3069 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:29:45.383707 kubelet[3069]: I0517 00:29:45.383667 3069 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
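The deprecated-flag warnings above (--container-runtime-endpoint, --volume-plugin-dir, --pod-infra-container-image) point at the kubelet config file. A sketch of the equivalent KubeletConfiguration fields; the endpoint value is an assumption for this host, while the plugin directory matches the FlexVolume probe path that appears later in this log:

    package main

    import (
    	"fmt"

    	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
    )

    func main() {
    	// Config-file equivalents for two of the flagged options.
    	cfg := kubeletv1beta1.KubeletConfiguration{
    		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock", // assumed socket
    		VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    	}
    	fmt.Println(cfg.ContainerRuntimeEndpoint, cfg.VolumePluginDir)
    }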
defaulting to /" May 17 00:29:45.383757 kubelet[3069]: I0517 00:29:45.383724 3069 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:29:45.383820 kubelet[3069]: I0517 00:29:45.383775 3069 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:29:45.383914 kubelet[3069]: I0517 00:29:45.383790 3069 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-65a4af4639","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:29:45.383914 kubelet[3069]: I0517 00:29:45.383890 3069 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:29:45.383914 kubelet[3069]: I0517 00:29:45.383897 3069 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:29:45.383914 kubelet[3069]: I0517 00:29:45.383911 3069 state_mem.go:36] "Initialized new in-memory state store" May 17 00:29:45.384022 kubelet[3069]: I0517 00:29:45.383963 3069 kubelet.go:408] "Attempting to sync node with API server" May 17 00:29:45.384022 kubelet[3069]: I0517 00:29:45.383969 3069 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:29:45.384022 kubelet[3069]: I0517 00:29:45.383986 3069 kubelet.go:314] "Adding apiserver pod source" May 17 00:29:45.384022 kubelet[3069]: I0517 00:29:45.383991 3069 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:29:45.384356 kubelet[3069]: I0517 00:29:45.384346 3069 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:29:45.384582 kubelet[3069]: I0517 00:29:45.384574 3069 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:29:45.384796 kubelet[3069]: I0517 00:29:45.384788 3069 server.go:1274] "Started kubelet" May 17 00:29:45.384831 kubelet[3069]: I0517 00:29:45.384815 3069 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:29:45.384895 
May 17 00:29:45.384895 kubelet[3069]: I0517 00:29:45.384845 3069 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:29:45.385056 kubelet[3069]: I0517 00:29:45.385045 3069 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:29:45.386311 kubelet[3069]: I0517 00:29:45.386175 3069 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:29:45.386376 kubelet[3069]: I0517 00:29:45.386336 3069 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 17 00:29:45.386376 kubelet[3069]: E0517 00:29:45.386342 3069 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-65a4af4639\" not found"
May 17 00:29:45.386433 kubelet[3069]: I0517 00:29:45.386387 3069 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:29:45.386433 kubelet[3069]: I0517 00:29:45.386420 3069 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 17 00:29:45.386516 kubelet[3069]: I0517 00:29:45.386504 3069 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:29:45.386899 kubelet[3069]: I0517 00:29:45.386634 3069 server.go:449] "Adding debug handlers to kubelet server"
May 17 00:29:45.386899 kubelet[3069]: E0517 00:29:45.386853 3069 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:29:45.387010 kubelet[3069]: I0517 00:29:45.387001 3069 factory.go:221] Registration of the systemd container factory successfully
May 17 00:29:45.387102 kubelet[3069]: I0517 00:29:45.387083 3069 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:29:45.387955 kubelet[3069]: I0517 00:29:45.387946 3069 factory.go:221] Registration of the containerd container factory successfully
May 17 00:29:45.391241 kubelet[3069]: I0517 00:29:45.391077 3069 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 00:29:45.391900 kubelet[3069]: I0517 00:29:45.391886 3069 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
protocol="IPv6" May 17 00:29:45.391900 kubelet[3069]: I0517 00:29:45.391902 3069 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:29:45.391986 kubelet[3069]: I0517 00:29:45.391921 3069 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:29:45.391986 kubelet[3069]: E0517 00:29:45.391954 3069 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:29:45.401707 kubelet[3069]: I0517 00:29:45.401695 3069 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:29:45.401707 kubelet[3069]: I0517 00:29:45.401704 3069 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:29:45.401707 kubelet[3069]: I0517 00:29:45.401713 3069 state_mem.go:36] "Initialized new in-memory state store" May 17 00:29:45.401812 kubelet[3069]: I0517 00:29:45.401794 3069 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:29:45.401812 kubelet[3069]: I0517 00:29:45.401801 3069 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:29:45.401843 kubelet[3069]: I0517 00:29:45.401813 3069 policy_none.go:49] "None policy: Start" May 17 00:29:45.402083 kubelet[3069]: I0517 00:29:45.402075 3069 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:29:45.402106 kubelet[3069]: I0517 00:29:45.402086 3069 state_mem.go:35] "Initializing new in-memory state store" May 17 00:29:45.402152 kubelet[3069]: I0517 00:29:45.402147 3069 state_mem.go:75] "Updated machine memory state" May 17 00:29:45.404012 kubelet[3069]: I0517 00:29:45.404003 3069 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:29:45.404090 kubelet[3069]: I0517 00:29:45.404083 3069 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:29:45.404124 kubelet[3069]: I0517 00:29:45.404090 3069 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:29:45.404193 kubelet[3069]: I0517 00:29:45.404186 3069 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:29:45.504890 kubelet[3069]: W0517 00:29:45.504829 3069 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:29:45.505197 kubelet[3069]: W0517 00:29:45.505050 3069 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:29:45.505197 kubelet[3069]: W0517 00:29:45.505138 3069 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:29:45.511496 kubelet[3069]: I0517 00:29:45.511433 3069 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-65a4af4639" May 17 00:29:45.520765 kubelet[3069]: I0517 00:29:45.520702 3069 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.3-n-65a4af4639" May 17 00:29:45.520940 kubelet[3069]: I0517 00:29:45.520843 3069 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-n-65a4af4639" May 17 00:29:45.587278 kubelet[3069]: I0517 00:29:45.587140 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/a143b24f9c1821ab0bbca96848324a62-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-65a4af4639\" (UID: \"a143b24f9c1821ab0bbca96848324a62\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-65a4af4639" May 17 00:29:45.688261 kubelet[3069]: I0517 00:29:45.688005 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a143b24f9c1821ab0bbca96848324a62-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-65a4af4639\" (UID: \"a143b24f9c1821ab0bbca96848324a62\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-65a4af4639" May 17 00:29:45.688261 kubelet[3069]: I0517 00:29:45.688111 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a143b24f9c1821ab0bbca96848324a62-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-65a4af4639\" (UID: \"a143b24f9c1821ab0bbca96848324a62\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-65a4af4639" May 17 00:29:45.688261 kubelet[3069]: I0517 00:29:45.688179 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77ffc6f051d14e85fefe5f258f59f00f-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-65a4af4639\" (UID: \"77ffc6f051d14e85fefe5f258f59f00f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-65a4af4639" May 17 00:29:45.688261 kubelet[3069]: I0517 00:29:45.688237 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77ffc6f051d14e85fefe5f258f59f00f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-65a4af4639\" (UID: \"77ffc6f051d14e85fefe5f258f59f00f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-65a4af4639" May 17 00:29:45.688838 kubelet[3069]: I0517 00:29:45.688289 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a143b24f9c1821ab0bbca96848324a62-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-65a4af4639\" (UID: \"a143b24f9c1821ab0bbca96848324a62\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-65a4af4639" May 17 00:29:45.688838 kubelet[3069]: I0517 00:29:45.688346 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a143b24f9c1821ab0bbca96848324a62-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-65a4af4639\" (UID: \"a143b24f9c1821ab0bbca96848324a62\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-65a4af4639" May 17 00:29:45.688838 kubelet[3069]: I0517 00:29:45.688440 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9c40c91615c4b7a49f8ee1746e4c378-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-65a4af4639\" (UID: \"f9c40c91615c4b7a49f8ee1746e4c378\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-65a4af4639" May 17 00:29:45.688838 kubelet[3069]: I0517 00:29:45.688495 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77ffc6f051d14e85fefe5f258f59f00f-k8s-certs\") pod 
\"kube-apiserver-ci-4081.3.3-n-65a4af4639\" (UID: \"77ffc6f051d14e85fefe5f258f59f00f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-65a4af4639" May 17 00:29:46.384636 kubelet[3069]: I0517 00:29:46.384588 3069 apiserver.go:52] "Watching apiserver" May 17 00:29:46.386937 kubelet[3069]: I0517 00:29:46.386896 3069 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:29:46.397875 kubelet[3069]: W0517 00:29:46.397855 3069 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:29:46.397979 kubelet[3069]: E0517 00:29:46.397900 3069 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.3-n-65a4af4639\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-n-65a4af4639" May 17 00:29:46.405801 kubelet[3069]: I0517 00:29:46.405732 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-n-65a4af4639" podStartSLOduration=1.405718434 podStartE2EDuration="1.405718434s" podCreationTimestamp="2025-05-17 00:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:29:46.405681782 +0000 UTC m=+1.053614722" watchObservedRunningTime="2025-05-17 00:29:46.405718434 +0000 UTC m=+1.053651374" May 17 00:29:46.414660 kubelet[3069]: I0517 00:29:46.414176 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-n-65a4af4639" podStartSLOduration=1.414148869 podStartE2EDuration="1.414148869s" podCreationTimestamp="2025-05-17 00:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:29:46.409543883 +0000 UTC m=+1.057476822" watchObservedRunningTime="2025-05-17 00:29:46.414148869 +0000 UTC m=+1.062081816" May 17 00:29:46.418808 kubelet[3069]: I0517 00:29:46.418740 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-65a4af4639" podStartSLOduration=1.418727844 podStartE2EDuration="1.418727844s" podCreationTimestamp="2025-05-17 00:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:29:46.414382659 +0000 UTC m=+1.062315605" watchObservedRunningTime="2025-05-17 00:29:46.418727844 +0000 UTC m=+1.066660781" May 17 00:29:50.277693 kubelet[3069]: I0517 00:29:50.277628 3069 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:29:50.278033 containerd[1808]: time="2025-05-17T00:29:50.277871959Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:29:50.278216 kubelet[3069]: I0517 00:29:50.278046 3069 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:29:51.318714 systemd[1]: Created slice kubepods-besteffort-pod1fd52e66_5638_4f2e_bf16_b11a27767fbb.slice - libcontainer container kubepods-besteffort-pod1fd52e66_5638_4f2e_bf16_b11a27767fbb.slice. 
May 17 00:29:51.330437 kubelet[3069]: I0517 00:29:51.330290 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1fd52e66-5638-4f2e-bf16-b11a27767fbb-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-8ntmx\" (UID: \"1fd52e66-5638-4f2e-bf16-b11a27767fbb\") " pod="tigera-operator/tigera-operator-7c5755cdcb-8ntmx"
May 17 00:29:51.330437 kubelet[3069]: I0517 00:29:51.330425 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrc8j\" (UniqueName: \"kubernetes.io/projected/1fd52e66-5638-4f2e-bf16-b11a27767fbb-kube-api-access-vrc8j\") pod \"tigera-operator-7c5755cdcb-8ntmx\" (UID: \"1fd52e66-5638-4f2e-bf16-b11a27767fbb\") " pod="tigera-operator/tigera-operator-7c5755cdcb-8ntmx"
May 17 00:29:51.431236 kubelet[3069]: I0517 00:29:51.431123 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94af423c-4a9f-4512-9d4c-de017f15c6e6-xtables-lock\") pod \"kube-proxy-c6659\" (UID: \"94af423c-4a9f-4512-9d4c-de017f15c6e6\") " pod="kube-system/kube-proxy-c6659"
May 17 00:29:51.431577 kubelet[3069]: I0517 00:29:51.431260 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94af423c-4a9f-4512-9d4c-de017f15c6e6-lib-modules\") pod \"kube-proxy-c6659\" (UID: \"94af423c-4a9f-4512-9d4c-de017f15c6e6\") " pod="kube-system/kube-proxy-c6659"
May 17 00:29:51.431577 kubelet[3069]: I0517 00:29:51.431318 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/94af423c-4a9f-4512-9d4c-de017f15c6e6-kube-proxy\") pod \"kube-proxy-c6659\" (UID: \"94af423c-4a9f-4512-9d4c-de017f15c6e6\") " pod="kube-system/kube-proxy-c6659"
May 17 00:29:51.431577 kubelet[3069]: I0517 00:29:51.431414 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hx8l\" (UniqueName: \"kubernetes.io/projected/94af423c-4a9f-4512-9d4c-de017f15c6e6-kube-api-access-9hx8l\") pod \"kube-proxy-c6659\" (UID: \"94af423c-4a9f-4512-9d4c-de017f15c6e6\") " pod="kube-system/kube-proxy-c6659"
May 17 00:29:51.433217 systemd[1]: Created slice kubepods-besteffort-pod94af423c_4a9f_4512_9d4c_de017f15c6e6.slice - libcontainer container kubepods-besteffort-pod94af423c_4a9f_4512_9d4c_de017f15c6e6.slice.
May 17 00:29:51.628925 containerd[1808]: time="2025-05-17T00:29:51.628677318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-8ntmx,Uid:1fd52e66-5638-4f2e-bf16-b11a27767fbb,Namespace:tigera-operator,Attempt:0,}"
May 17 00:29:51.640052 containerd[1808]: time="2025-05-17T00:29:51.639999017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:29:51.640052 containerd[1808]: time="2025-05-17T00:29:51.640041413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:29:51.640052 containerd[1808]: time="2025-05-17T00:29:51.640051251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:29:51.640207 containerd[1808]: time="2025-05-17T00:29:51.640098318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:29:51.669856 systemd[1]: Started cri-containerd-b7bd7df6ca08521aa71f4052c0d11df536125f2521f29836505121fac4c74f76.scope - libcontainer container b7bd7df6ca08521aa71f4052c0d11df536125f2521f29836505121fac4c74f76.
May 17 00:29:51.738878 containerd[1808]: time="2025-05-17T00:29:51.738847711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c6659,Uid:94af423c-4a9f-4512-9d4c-de017f15c6e6,Namespace:kube-system,Attempt:0,}"
May 17 00:29:51.742662 containerd[1808]: time="2025-05-17T00:29:51.742641602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-8ntmx,Uid:1fd52e66-5638-4f2e-bf16-b11a27767fbb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b7bd7df6ca08521aa71f4052c0d11df536125f2521f29836505121fac4c74f76\""
May 17 00:29:51.743461 containerd[1808]: time="2025-05-17T00:29:51.743449734Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 17 00:29:51.748371 containerd[1808]: time="2025-05-17T00:29:51.748298991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:29:51.748371 containerd[1808]: time="2025-05-17T00:29:51.748327870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:29:51.748371 containerd[1808]: time="2025-05-17T00:29:51.748335076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:29:51.748477 containerd[1808]: time="2025-05-17T00:29:51.748378362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:29:51.767648 systemd[1]: Started cri-containerd-027c529155a56d7b03c392f4b0e87778701501e78b222cf122e0a175e32a11ec.scope - libcontainer container 027c529155a56d7b03c392f4b0e87778701501e78b222cf122e0a175e32a11ec.
May 17 00:29:51.777946 containerd[1808]: time="2025-05-17T00:29:51.777926105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c6659,Uid:94af423c-4a9f-4512-9d4c-de017f15c6e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"027c529155a56d7b03c392f4b0e87778701501e78b222cf122e0a175e32a11ec\""
May 17 00:29:51.779076 containerd[1808]: time="2025-05-17T00:29:51.779060353Z" level=info msg="CreateContainer within sandbox \"027c529155a56d7b03c392f4b0e87778701501e78b222cf122e0a175e32a11ec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:29:51.784557 containerd[1808]: time="2025-05-17T00:29:51.784513018Z" level=info msg="CreateContainer within sandbox \"027c529155a56d7b03c392f4b0e87778701501e78b222cf122e0a175e32a11ec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d529e7000b6113e8f1b89d1a79525981ed7893c1874f9ce8e15dd588828c0788\""
May 17 00:29:51.784795 containerd[1808]: time="2025-05-17T00:29:51.784755624Z" level=info msg="StartContainer for \"d529e7000b6113e8f1b89d1a79525981ed7893c1874f9ce8e15dd588828c0788\""
May 17 00:29:51.812641 systemd[1]: Started cri-containerd-d529e7000b6113e8f1b89d1a79525981ed7893c1874f9ce8e15dd588828c0788.scope - libcontainer container d529e7000b6113e8f1b89d1a79525981ed7893c1874f9ce8e15dd588828c0788.
May 17 00:29:51.829488 containerd[1808]: time="2025-05-17T00:29:51.829433816Z" level=info msg="StartContainer for \"d529e7000b6113e8f1b89d1a79525981ed7893c1874f9ce8e15dd588828c0788\" returns successfully"
May 17 00:29:52.432637 kubelet[3069]: I0517 00:29:52.432522 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c6659" podStartSLOduration=1.432484154 podStartE2EDuration="1.432484154s" podCreationTimestamp="2025-05-17 00:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:29:52.432035617 +0000 UTC m=+7.079968644" watchObservedRunningTime="2025-05-17 00:29:52.432484154 +0000 UTC m=+7.080417150"
May 17 00:29:53.750950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642029037.mount: Deactivated successfully.
May 17 00:29:54.215825 containerd[1808]: time="2025-05-17T00:29:54.215740115Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:29:54.216018 containerd[1808]: time="2025-05-17T00:29:54.215960906Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451"
May 17 00:29:54.216309 containerd[1808]: time="2025-05-17T00:29:54.216273826Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:29:54.217519 containerd[1808]: time="2025-05-17T00:29:54.217475391Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:29:54.217986 containerd[1808]: time="2025-05-17T00:29:54.217944392Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.474478478s"
May 17 00:29:54.217986 containerd[1808]: time="2025-05-17T00:29:54.217959628Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\""
May 17 00:29:54.218952 containerd[1808]: time="2025-05-17T00:29:54.218938151Z" level=info msg="CreateContainer within sandbox \"b7bd7df6ca08521aa71f4052c0d11df536125f2521f29836505121fac4c74f76\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 17 00:29:54.222714 containerd[1808]: time="2025-05-17T00:29:54.222696740Z" level=info msg="CreateContainer within sandbox \"b7bd7df6ca08521aa71f4052c0d11df536125f2521f29836505121fac4c74f76\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9cf6de754ef6e282c1306ae22bed3f58c2f74e9a3438b63f914a0f2860c1fd18\""
May 17 00:29:54.222891 containerd[1808]: time="2025-05-17T00:29:54.222877014Z" level=info msg="StartContainer for \"9cf6de754ef6e282c1306ae22bed3f58c2f74e9a3438b63f914a0f2860c1fd18\""
May 17 00:29:54.253821 systemd[1]: Started cri-containerd-9cf6de754ef6e282c1306ae22bed3f58c2f74e9a3438b63f914a0f2860c1fd18.scope - libcontainer container 9cf6de754ef6e282c1306ae22bed3f58c2f74e9a3438b63f914a0f2860c1fd18.
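The ImageCreate/Pulled events above are containerd completing the operator pull requested at 00:29:51 (2.47s wall time, 25055451 bytes read). A sketch of the same pull via the containerd Go client; the socket path and the CRI "k8s.io" namespace are assumed defaults, not taken from this log:

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Assumed: containerd's standard socket and the namespace CRI images live in.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	// Pull and unpack, as the CRI plugin does before CreateContainer.
    	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.0", containerd.WithPullUnpack)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("pulled", img.Name())
    }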
May 17 00:29:54.304547 containerd[1808]: time="2025-05-17T00:29:54.304509627Z" level=info msg="StartContainer for \"9cf6de754ef6e282c1306ae22bed3f58c2f74e9a3438b63f914a0f2860c1fd18\" returns successfully"
May 17 00:29:54.496571 kubelet[3069]: I0517 00:29:54.496459 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-8ntmx" podStartSLOduration=1.02132543 podStartE2EDuration="3.496418107s" podCreationTimestamp="2025-05-17 00:29:51 +0000 UTC" firstStartedPulling="2025-05-17 00:29:51.743277463 +0000 UTC m=+6.391210402" lastFinishedPulling="2025-05-17 00:29:54.21837014 +0000 UTC m=+8.866303079" observedRunningTime="2025-05-17 00:29:54.423142611 +0000 UTC m=+9.071075560" watchObservedRunningTime="2025-05-17 00:29:54.496418107 +0000 UTC m=+9.144351098"
May 17 00:29:58.807300 sudo[2086]: pam_unix(sudo:session): session closed for user root
May 17 00:29:58.808350 sshd[2083]: pam_unix(sshd:session): session closed for user core
May 17 00:29:58.810940 systemd[1]: sshd@8-147.75.203.231:22-147.75.109.163:44884.service: Deactivated successfully.
May 17 00:29:58.811996 systemd[1]: session-11.scope: Deactivated successfully.
May 17 00:29:58.812120 systemd[1]: session-11.scope: Consumed 3.149s CPU time, 170.8M memory peak, 0B memory swap peak.
May 17 00:29:58.812517 systemd-logind[1798]: Session 11 logged out. Waiting for processes to exit.
May 17 00:29:58.813156 systemd-logind[1798]: Removed session 11.
May 17 00:29:59.471454 update_engine[1803]: I20250517 00:29:59.471418 1803 update_attempter.cc:509] Updating boot flags...
May 17 00:29:59.500375 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3602)
May 17 00:29:59.543376 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3602)
May 17 00:29:59.572374 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3602)
May 17 00:30:01.215964 systemd[1]: Created slice kubepods-besteffort-podf31262cc_3dd1_4c2c_8e22_32f9d3e6f1a6.slice - libcontainer container kubepods-besteffort-podf31262cc_3dd1_4c2c_8e22_32f9d3e6f1a6.slice.
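The tigera-operator startup entry above also shows how podStartSLOduration relates to the other fields: it is the end-to-end duration minus the image-pull window (which is why the earlier static pods, with zero pull timestamps, have SLO == E2E). Checking the arithmetic from the logged values:

    package main

    import "fmt"

    func main() {
    	// Figures copied from the tigera-operator entry above (seconds past 00:29).
    	e2e := 3.496418107                  // podStartE2EDuration
    	pull := 54.218370140 - 51.743277463 // lastFinishedPulling - firstStartedPulling
    	fmt.Printf("%.8f\n", e2e-pull)      // 1.02132543, the logged podStartSLOduration
    }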
May 17 00:30:01.299640 kubelet[3069]: I0517 00:30:01.299555 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f31262cc-3dd1-4c2c-8e22-32f9d3e6f1a6-typha-certs\") pod \"calico-typha-6fd6bfb688-xjd6t\" (UID: \"f31262cc-3dd1-4c2c-8e22-32f9d3e6f1a6\") " pod="calico-system/calico-typha-6fd6bfb688-xjd6t"
May 17 00:30:01.300581 kubelet[3069]: I0517 00:30:01.299765 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x9bv\" (UniqueName: \"kubernetes.io/projected/f31262cc-3dd1-4c2c-8e22-32f9d3e6f1a6-kube-api-access-4x9bv\") pod \"calico-typha-6fd6bfb688-xjd6t\" (UID: \"f31262cc-3dd1-4c2c-8e22-32f9d3e6f1a6\") " pod="calico-system/calico-typha-6fd6bfb688-xjd6t"
May 17 00:30:01.300581 kubelet[3069]: I0517 00:30:01.299875 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f31262cc-3dd1-4c2c-8e22-32f9d3e6f1a6-tigera-ca-bundle\") pod \"calico-typha-6fd6bfb688-xjd6t\" (UID: \"f31262cc-3dd1-4c2c-8e22-32f9d3e6f1a6\") " pod="calico-system/calico-typha-6fd6bfb688-xjd6t"
May 17 00:30:01.520174 containerd[1808]: time="2025-05-17T00:30:01.520066403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fd6bfb688-xjd6t,Uid:f31262cc-3dd1-4c2c-8e22-32f9d3e6f1a6,Namespace:calico-system,Attempt:0,}"
May 17 00:30:01.532122 containerd[1808]: time="2025-05-17T00:30:01.532075740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:30:01.532210 containerd[1808]: time="2025-05-17T00:30:01.532123423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:30:01.532336 containerd[1808]: time="2025-05-17T00:30:01.532324586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:30:01.532434 containerd[1808]: time="2025-05-17T00:30:01.532380937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:30:01.533168 systemd[1]: Created slice kubepods-besteffort-poda89e0ae9_094c_4796_8522_daf36e0a1d92.slice - libcontainer container kubepods-besteffort-poda89e0ae9_094c_4796_8522_daf36e0a1d92.slice.
May 17 00:30:01.551926 systemd[1]: Started cri-containerd-7a9052dc6a83e0ba277b8100b566c885d0f8006246b776c758e05c467bab6191.scope - libcontainer container 7a9052dc6a83e0ba277b8100b566c885d0f8006246b776c758e05c467bab6191.
May 17 00:30:01.602779 kubelet[3069]: I0517 00:30:01.602739 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a89e0ae9-094c-4796-8522-daf36e0a1d92-xtables-lock\") pod \"calico-node-9njrx\" (UID: \"a89e0ae9-094c-4796-8522-daf36e0a1d92\") " pod="calico-system/calico-node-9njrx"
May 17 00:30:01.602914 kubelet[3069]: I0517 00:30:01.602793 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a89e0ae9-094c-4796-8522-daf36e0a1d92-cni-net-dir\") pod \"calico-node-9njrx\" (UID: \"a89e0ae9-094c-4796-8522-daf36e0a1d92\") " pod="calico-system/calico-node-9njrx"
May 17 00:30:01.602914 kubelet[3069]: I0517 00:30:01.602823 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a89e0ae9-094c-4796-8522-daf36e0a1d92-var-run-calico\") pod \"calico-node-9njrx\" (UID: \"a89e0ae9-094c-4796-8522-daf36e0a1d92\") " pod="calico-system/calico-node-9njrx"
May 17 00:30:01.602914 kubelet[3069]: I0517 00:30:01.602848 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a89e0ae9-094c-4796-8522-daf36e0a1d92-var-lib-calico\") pod \"calico-node-9njrx\" (UID: \"a89e0ae9-094c-4796-8522-daf36e0a1d92\") " pod="calico-system/calico-node-9njrx"
May 17 00:30:01.602914 kubelet[3069]: I0517 00:30:01.602891 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a89e0ae9-094c-4796-8522-daf36e0a1d92-lib-modules\") pod \"calico-node-9njrx\" (UID: \"a89e0ae9-094c-4796-8522-daf36e0a1d92\") " pod="calico-system/calico-node-9njrx"
May 17 00:30:01.603156 kubelet[3069]: I0517 00:30:01.602918 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a89e0ae9-094c-4796-8522-daf36e0a1d92-cni-log-dir\") pod \"calico-node-9njrx\" (UID: \"a89e0ae9-094c-4796-8522-daf36e0a1d92\") " pod="calico-system/calico-node-9njrx"
May 17 00:30:01.603156 kubelet[3069]: I0517 00:30:01.602954 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a89e0ae9-094c-4796-8522-daf36e0a1d92-cni-bin-dir\") pod \"calico-node-9njrx\" (UID: \"a89e0ae9-094c-4796-8522-daf36e0a1d92\") " pod="calico-system/calico-node-9njrx"
May 17 00:30:01.603156 kubelet[3069]: I0517 00:30:01.602983 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a89e0ae9-094c-4796-8522-daf36e0a1d92-node-certs\") pod \"calico-node-9njrx\" (UID: \"a89e0ae9-094c-4796-8522-daf36e0a1d92\") " pod="calico-system/calico-node-9njrx"
May 17 00:30:01.603156 kubelet[3069]: I0517 00:30:01.603027 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a89e0ae9-094c-4796-8522-daf36e0a1d92-policysync\") pod \"calico-node-9njrx\" (UID: \"a89e0ae9-094c-4796-8522-daf36e0a1d92\") " pod="calico-system/calico-node-9njrx"
May 17 00:30:01.603156 kubelet[3069]: I0517 00:30:01.603066 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a89e0ae9-094c-4796-8522-daf36e0a1d92-flexvol-driver-host\") pod \"calico-node-9njrx\" (UID: \"a89e0ae9-094c-4796-8522-daf36e0a1d92\") " pod="calico-system/calico-node-9njrx"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a89e0ae9-094c-4796-8522-daf36e0a1d92-flexvol-driver-host\") pod \"calico-node-9njrx\" (UID: \"a89e0ae9-094c-4796-8522-daf36e0a1d92\") " pod="calico-system/calico-node-9njrx" May 17 00:30:01.603417 kubelet[3069]: I0517 00:30:01.603110 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a89e0ae9-094c-4796-8522-daf36e0a1d92-tigera-ca-bundle\") pod \"calico-node-9njrx\" (UID: \"a89e0ae9-094c-4796-8522-daf36e0a1d92\") " pod="calico-system/calico-node-9njrx" May 17 00:30:01.603417 kubelet[3069]: I0517 00:30:01.603141 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr4b5\" (UniqueName: \"kubernetes.io/projected/a89e0ae9-094c-4796-8522-daf36e0a1d92-kube-api-access-jr4b5\") pod \"calico-node-9njrx\" (UID: \"a89e0ae9-094c-4796-8522-daf36e0a1d92\") " pod="calico-system/calico-node-9njrx" May 17 00:30:01.626723 containerd[1808]: time="2025-05-17T00:30:01.626692373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fd6bfb688-xjd6t,Uid:f31262cc-3dd1-4c2c-8e22-32f9d3e6f1a6,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a9052dc6a83e0ba277b8100b566c885d0f8006246b776c758e05c467bab6191\"" May 17 00:30:01.627556 containerd[1808]: time="2025-05-17T00:30:01.627542820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:30:01.708648 kubelet[3069]: E0517 00:30:01.708548 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.708648 kubelet[3069]: W0517 00:30:01.708613 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.708990 kubelet[3069]: E0517 00:30:01.708705 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.712227 kubelet[3069]: E0517 00:30:01.712153 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.712227 kubelet[3069]: W0517 00:30:01.712194 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.712227 kubelet[3069]: E0517 00:30:01.712233 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.724817 kubelet[3069]: E0517 00:30:01.724724 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.724817 kubelet[3069]: W0517 00:30:01.724762 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.724817 kubelet[3069]: E0517 00:30:01.724801 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.784931 kubelet[3069]: E0517 00:30:01.784695 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdwc8" podUID="8e114f42-fc56-4b66-8d81-a37f65ab357c" May 17 00:30:01.802050 kubelet[3069]: E0517 00:30:01.802005 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.802050 kubelet[3069]: W0517 00:30:01.802044 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.802460 kubelet[3069]: E0517 00:30:01.802088 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.802670 kubelet[3069]: E0517 00:30:01.802635 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.802670 kubelet[3069]: W0517 00:30:01.802665 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.802978 kubelet[3069]: E0517 00:30:01.802699 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.803144 kubelet[3069]: E0517 00:30:01.803121 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.803144 kubelet[3069]: W0517 00:30:01.803140 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.803328 kubelet[3069]: E0517 00:30:01.803164 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.803499 kubelet[3069]: E0517 00:30:01.803476 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.803499 kubelet[3069]: W0517 00:30:01.803495 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.803706 kubelet[3069]: E0517 00:30:01.803519 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.803837 kubelet[3069]: E0517 00:30:01.803816 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.803939 kubelet[3069]: W0517 00:30:01.803835 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.803939 kubelet[3069]: E0517 00:30:01.803859 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.804245 kubelet[3069]: E0517 00:30:01.804224 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.804303 kubelet[3069]: W0517 00:30:01.804246 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.804303 kubelet[3069]: E0517 00:30:01.804268 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.804646 kubelet[3069]: E0517 00:30:01.804621 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.804646 kubelet[3069]: W0517 00:30:01.804640 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.804831 kubelet[3069]: E0517 00:30:01.804680 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.805013 kubelet[3069]: E0517 00:30:01.804995 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.805077 kubelet[3069]: W0517 00:30:01.805015 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.805077 kubelet[3069]: E0517 00:30:01.805034 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.805339 kubelet[3069]: E0517 00:30:01.805324 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.805422 kubelet[3069]: W0517 00:30:01.805340 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.805422 kubelet[3069]: E0517 00:30:01.805355 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.805719 kubelet[3069]: E0517 00:30:01.805695 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.805794 kubelet[3069]: W0517 00:30:01.805725 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.805794 kubelet[3069]: E0517 00:30:01.805755 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.806102 kubelet[3069]: E0517 00:30:01.806083 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.806102 kubelet[3069]: W0517 00:30:01.806099 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.806212 kubelet[3069]: E0517 00:30:01.806118 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.806420 kubelet[3069]: E0517 00:30:01.806401 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.806420 kubelet[3069]: W0517 00:30:01.806415 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.806554 kubelet[3069]: E0517 00:30:01.806430 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.806689 kubelet[3069]: E0517 00:30:01.806673 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.806752 kubelet[3069]: W0517 00:30:01.806686 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.806752 kubelet[3069]: E0517 00:30:01.806702 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.807025 kubelet[3069]: E0517 00:30:01.807003 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.807100 kubelet[3069]: W0517 00:30:01.807024 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.807100 kubelet[3069]: E0517 00:30:01.807048 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.807327 kubelet[3069]: E0517 00:30:01.807307 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.807420 kubelet[3069]: W0517 00:30:01.807328 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.807420 kubelet[3069]: E0517 00:30:01.807352 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.807701 kubelet[3069]: E0517 00:30:01.807684 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.807701 kubelet[3069]: W0517 00:30:01.807698 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.807843 kubelet[3069]: E0517 00:30:01.807713 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.807980 kubelet[3069]: E0517 00:30:01.807961 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.807980 kubelet[3069]: W0517 00:30:01.807975 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.808102 kubelet[3069]: E0517 00:30:01.807989 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.808213 kubelet[3069]: E0517 00:30:01.808200 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.808213 kubelet[3069]: W0517 00:30:01.808211 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.808303 kubelet[3069]: E0517 00:30:01.808222 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.808435 kubelet[3069]: E0517 00:30:01.808422 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.808489 kubelet[3069]: W0517 00:30:01.808435 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.808489 kubelet[3069]: E0517 00:30:01.808446 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.808683 kubelet[3069]: E0517 00:30:01.808667 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.808732 kubelet[3069]: W0517 00:30:01.808687 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.808732 kubelet[3069]: E0517 00:30:01.808703 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.808990 kubelet[3069]: E0517 00:30:01.808977 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.809046 kubelet[3069]: W0517 00:30:01.808993 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.809046 kubelet[3069]: E0517 00:30:01.809009 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.809132 kubelet[3069]: I0517 00:30:01.809041 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8e114f42-fc56-4b66-8d81-a37f65ab357c-socket-dir\") pod \"csi-node-driver-zdwc8\" (UID: \"8e114f42-fc56-4b66-8d81-a37f65ab357c\") " pod="calico-system/csi-node-driver-zdwc8" May 17 00:30:01.809238 kubelet[3069]: E0517 00:30:01.809223 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.809238 kubelet[3069]: W0517 00:30:01.809236 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.809340 kubelet[3069]: E0517 00:30:01.809251 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.809340 kubelet[3069]: I0517 00:30:01.809271 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjgr4\" (UniqueName: \"kubernetes.io/projected/8e114f42-fc56-4b66-8d81-a37f65ab357c-kube-api-access-jjgr4\") pod \"csi-node-driver-zdwc8\" (UID: \"8e114f42-fc56-4b66-8d81-a37f65ab357c\") " pod="calico-system/csi-node-driver-zdwc8" May 17 00:30:01.809633 kubelet[3069]: E0517 00:30:01.809611 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.809633 kubelet[3069]: W0517 00:30:01.809630 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.809747 kubelet[3069]: E0517 00:30:01.809651 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.809957 kubelet[3069]: E0517 00:30:01.809941 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.809957 kubelet[3069]: W0517 00:30:01.809954 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.810061 kubelet[3069]: E0517 00:30:01.809972 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.810211 kubelet[3069]: E0517 00:30:01.810194 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.810265 kubelet[3069]: W0517 00:30:01.810213 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.810265 kubelet[3069]: E0517 00:30:01.810236 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.810345 kubelet[3069]: I0517 00:30:01.810268 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e114f42-fc56-4b66-8d81-a37f65ab357c-kubelet-dir\") pod \"csi-node-driver-zdwc8\" (UID: \"8e114f42-fc56-4b66-8d81-a37f65ab357c\") " pod="calico-system/csi-node-driver-zdwc8" May 17 00:30:01.810527 kubelet[3069]: E0517 00:30:01.810510 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.810527 kubelet[3069]: W0517 00:30:01.810525 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.810617 kubelet[3069]: E0517 00:30:01.810541 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.810744 kubelet[3069]: E0517 00:30:01.810731 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.810744 kubelet[3069]: W0517 00:30:01.810742 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.810838 kubelet[3069]: E0517 00:30:01.810756 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.810966 kubelet[3069]: E0517 00:30:01.810952 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.810966 kubelet[3069]: W0517 00:30:01.810964 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.811049 kubelet[3069]: E0517 00:30:01.810979 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.811049 kubelet[3069]: I0517 00:30:01.811002 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8e114f42-fc56-4b66-8d81-a37f65ab357c-registration-dir\") pod \"csi-node-driver-zdwc8\" (UID: \"8e114f42-fc56-4b66-8d81-a37f65ab357c\") " pod="calico-system/csi-node-driver-zdwc8" May 17 00:30:01.811227 kubelet[3069]: E0517 00:30:01.811211 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.811269 kubelet[3069]: W0517 00:30:01.811229 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.811269 kubelet[3069]: E0517 00:30:01.811247 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.811495 kubelet[3069]: E0517 00:30:01.811479 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.811565 kubelet[3069]: W0517 00:30:01.811495 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.811565 kubelet[3069]: E0517 00:30:01.811514 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.811769 kubelet[3069]: E0517 00:30:01.811757 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.811830 kubelet[3069]: W0517 00:30:01.811769 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.811830 kubelet[3069]: E0517 00:30:01.811784 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.811830 kubelet[3069]: I0517 00:30:01.811806 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8e114f42-fc56-4b66-8d81-a37f65ab357c-varrun\") pod \"csi-node-driver-zdwc8\" (UID: \"8e114f42-fc56-4b66-8d81-a37f65ab357c\") " pod="calico-system/csi-node-driver-zdwc8" May 17 00:30:01.812045 kubelet[3069]: E0517 00:30:01.812031 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.812086 kubelet[3069]: W0517 00:30:01.812045 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.812086 kubelet[3069]: E0517 00:30:01.812061 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.812261 kubelet[3069]: E0517 00:30:01.812249 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.812315 kubelet[3069]: W0517 00:30:01.812264 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.812315 kubelet[3069]: E0517 00:30:01.812298 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.812511 kubelet[3069]: E0517 00:30:01.812499 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.812567 kubelet[3069]: W0517 00:30:01.812511 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.812567 kubelet[3069]: E0517 00:30:01.812523 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.812750 kubelet[3069]: E0517 00:30:01.812739 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.812795 kubelet[3069]: W0517 00:30:01.812750 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.812795 kubelet[3069]: E0517 00:30:01.812761 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.837279 containerd[1808]: time="2025-05-17T00:30:01.837242043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9njrx,Uid:a89e0ae9-094c-4796-8522-daf36e0a1d92,Namespace:calico-system,Attempt:0,}" May 17 00:30:01.847640 containerd[1808]: time="2025-05-17T00:30:01.847371357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:01.847640 containerd[1808]: time="2025-05-17T00:30:01.847597696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:01.847640 containerd[1808]: time="2025-05-17T00:30:01.847605984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:01.847766 containerd[1808]: time="2025-05-17T00:30:01.847645774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:01.870659 systemd[1]: Started cri-containerd-038315c108ea92147b162c291283800e77183ca74c89db069aa563441dde6615.scope - libcontainer container 038315c108ea92147b162c291283800e77183ca74c89db069aa563441dde6615. May 17 00:30:01.883841 containerd[1808]: time="2025-05-17T00:30:01.883773659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9njrx,Uid:a89e0ae9-094c-4796-8522-daf36e0a1d92,Namespace:calico-system,Attempt:0,} returns sandbox id \"038315c108ea92147b162c291283800e77183ca74c89db069aa563441dde6615\"" May 17 00:30:01.913510 kubelet[3069]: E0517 00:30:01.913467 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.913510 kubelet[3069]: W0517 00:30:01.913504 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.913826 kubelet[3069]: E0517 00:30:01.913545 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.914008 kubelet[3069]: E0517 00:30:01.913978 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.914008 kubelet[3069]: W0517 00:30:01.914003 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.914255 kubelet[3069]: E0517 00:30:01.914039 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.914495 kubelet[3069]: E0517 00:30:01.914439 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.914495 kubelet[3069]: W0517 00:30:01.914464 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.914495 kubelet[3069]: E0517 00:30:01.914500 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.914878 kubelet[3069]: E0517 00:30:01.914856 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.914967 kubelet[3069]: W0517 00:30:01.914879 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.914967 kubelet[3069]: E0517 00:30:01.914903 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.915314 kubelet[3069]: E0517 00:30:01.915290 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.915465 kubelet[3069]: W0517 00:30:01.915315 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.915465 kubelet[3069]: E0517 00:30:01.915345 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.915812 kubelet[3069]: E0517 00:30:01.915784 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.915970 kubelet[3069]: W0517 00:30:01.915811 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.915970 kubelet[3069]: E0517 00:30:01.915852 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.916296 kubelet[3069]: E0517 00:30:01.916272 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.916416 kubelet[3069]: W0517 00:30:01.916299 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.916416 kubelet[3069]: E0517 00:30:01.916327 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.916781 kubelet[3069]: E0517 00:30:01.916721 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.916781 kubelet[3069]: W0517 00:30:01.916748 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.916781 kubelet[3069]: E0517 00:30:01.916776 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.917197 kubelet[3069]: E0517 00:30:01.917132 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.917197 kubelet[3069]: W0517 00:30:01.917152 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.917419 kubelet[3069]: E0517 00:30:01.917223 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.917561 kubelet[3069]: E0517 00:30:01.917502 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.917561 kubelet[3069]: W0517 00:30:01.917520 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.917561 kubelet[3069]: E0517 00:30:01.917557 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.917890 kubelet[3069]: E0517 00:30:01.917841 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.917890 kubelet[3069]: W0517 00:30:01.917863 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.918085 kubelet[3069]: E0517 00:30:01.917905 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.918260 kubelet[3069]: E0517 00:30:01.918225 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.918260 kubelet[3069]: W0517 00:30:01.918250 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.918561 kubelet[3069]: E0517 00:30:01.918319 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.918849 kubelet[3069]: E0517 00:30:01.918808 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.918849 kubelet[3069]: W0517 00:30:01.918839 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.919089 kubelet[3069]: E0517 00:30:01.918876 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.919355 kubelet[3069]: E0517 00:30:01.919329 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.919355 kubelet[3069]: W0517 00:30:01.919352 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.919622 kubelet[3069]: E0517 00:30:01.919404 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.919991 kubelet[3069]: E0517 00:30:01.919961 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.920090 kubelet[3069]: W0517 00:30:01.919993 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.920090 kubelet[3069]: E0517 00:30:01.920029 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.920548 kubelet[3069]: E0517 00:30:01.920498 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.920548 kubelet[3069]: W0517 00:30:01.920533 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.920945 kubelet[3069]: E0517 00:30:01.920626 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.921191 kubelet[3069]: E0517 00:30:01.921149 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.921191 kubelet[3069]: W0517 00:30:01.921181 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.921489 kubelet[3069]: E0517 00:30:01.921278 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.921736 kubelet[3069]: E0517 00:30:01.921698 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.921736 kubelet[3069]: W0517 00:30:01.921727 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.922021 kubelet[3069]: E0517 00:30:01.921835 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.922230 kubelet[3069]: E0517 00:30:01.922190 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.922230 kubelet[3069]: W0517 00:30:01.922219 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.922531 kubelet[3069]: E0517 00:30:01.922328 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.922753 kubelet[3069]: E0517 00:30:01.922715 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.922753 kubelet[3069]: W0517 00:30:01.922744 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.923086 kubelet[3069]: E0517 00:30:01.922787 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.923547 kubelet[3069]: E0517 00:30:01.923491 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.923547 kubelet[3069]: W0517 00:30:01.923533 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.923788 kubelet[3069]: E0517 00:30:01.923585 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:01.924137 kubelet[3069]: E0517 00:30:01.924107 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.924298 kubelet[3069]: W0517 00:30:01.924141 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.924298 kubelet[3069]: E0517 00:30:01.924223 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.924850 kubelet[3069]: E0517 00:30:01.924817 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.924976 kubelet[3069]: W0517 00:30:01.924851 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.924976 kubelet[3069]: E0517 00:30:01.924937 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.925464 kubelet[3069]: E0517 00:30:01.925420 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.925464 kubelet[3069]: W0517 00:30:01.925455 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.925736 kubelet[3069]: E0517 00:30:01.925537 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.926107 kubelet[3069]: E0517 00:30:01.926067 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.926107 kubelet[3069]: W0517 00:30:01.926099 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.926358 kubelet[3069]: E0517 00:30:01.926140 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:01.943282 kubelet[3069]: E0517 00:30:01.943229 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:01.943282 kubelet[3069]: W0517 00:30:01.943278 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:01.943759 kubelet[3069]: E0517 00:30:01.943335 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:03.362007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1037458840.mount: Deactivated successfully. May 17 00:30:03.393042 kubelet[3069]: E0517 00:30:03.392957 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdwc8" podUID="8e114f42-fc56-4b66-8d81-a37f65ab357c" May 17 00:30:03.965859 containerd[1808]: time="2025-05-17T00:30:03.965806652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:03.966070 containerd[1808]: time="2025-05-17T00:30:03.966009956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 17 00:30:03.966385 containerd[1808]: time="2025-05-17T00:30:03.966343040Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:03.968111 containerd[1808]: time="2025-05-17T00:30:03.968066304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:03.968461 containerd[1808]: time="2025-05-17T00:30:03.968421693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.340860234s" May 17 00:30:03.968461 containerd[1808]: time="2025-05-17T00:30:03.968438529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:30:03.968972 containerd[1808]: time="2025-05-17T00:30:03.968960268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:30:03.971874 containerd[1808]: time="2025-05-17T00:30:03.971852156Z" level=info msg="CreateContainer within sandbox \"7a9052dc6a83e0ba277b8100b566c885d0f8006246b776c758e05c467bab6191\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:30:03.976476 containerd[1808]: time="2025-05-17T00:30:03.976426241Z" level=info msg="CreateContainer within sandbox \"7a9052dc6a83e0ba277b8100b566c885d0f8006246b776c758e05c467bab6191\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cedf07c57744690d6ef60be88cbd170424a19068f9d0445fe886a9ebf8359b8f\"" May 17 00:30:03.976730 containerd[1808]: time="2025-05-17T00:30:03.976672742Z" level=info msg="StartContainer for \"cedf07c57744690d6ef60be88cbd170424a19068f9d0445fe886a9ebf8359b8f\"" May 17 00:30:03.995551 systemd[1]: Started cri-containerd-cedf07c57744690d6ef60be88cbd170424a19068f9d0445fe886a9ebf8359b8f.scope - libcontainer container cedf07c57744690d6ef60be88cbd170424a19068f9d0445fe886a9ebf8359b8f. 
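The driver-call.go and plugins.go errors that dominate the log above are all one failure repeating: on each plugin sweep, kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and unmarshals the process's stdout as a JSON driver status. The executable is missing ("executable file not found in $PATH"), so stdout is empty and unmarshalling fails with "unexpected end of JSON input". Below is a minimal sketch of a driver that would satisfy this probe; the Go form is only an illustration of the exec-and-JSON call convention (real FlexVolume drivers are typically shell scripts), and the nodeagent~uds binary itself is normally installed by Calico's flexvol-driver init container rather than written by hand.

// Minimal FlexVolume driver sketch, assuming only the call convention
// visible in the log: kubelet execs the binary with a subcommand and
// parses stdout as JSON. Empty stdout is what produces the
// "unexpected end of JSON input" errors above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Advertise no attach support so kubelet skips attach/detach calls.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// "Not supported" is the documented way to decline optional calls.
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}

Replying "attach": false at init tells kubelet not to route attach/detach through the driver, which is all the probe needs to stop logging these errors on every sweep.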
May 17 00:30:04.018181 containerd[1808]: time="2025-05-17T00:30:04.018159009Z" level=info msg="StartContainer for \"cedf07c57744690d6ef60be88cbd170424a19068f9d0445fe886a9ebf8359b8f\" returns successfully" May 17 00:30:04.469274 kubelet[3069]: I0517 00:30:04.469167 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fd6bfb688-xjd6t" podStartSLOduration=1.12764003 podStartE2EDuration="3.469129814s" podCreationTimestamp="2025-05-17 00:30:01 +0000 UTC" firstStartedPulling="2025-05-17 00:30:01.62738187 +0000 UTC m=+16.275314812" lastFinishedPulling="2025-05-17 00:30:03.968871656 +0000 UTC m=+18.616804596" observedRunningTime="2025-05-17 00:30:04.468764509 +0000 UTC m=+19.116697520" watchObservedRunningTime="2025-05-17 00:30:04.469129814 +0000 UTC m=+19.117062806" May 17 00:30:04.530348 kubelet[3069]: E0517 00:30:04.530239 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.530348 kubelet[3069]: W0517 00:30:04.530291 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.530348 kubelet[3069]: E0517 00:30:04.530340 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.531129 kubelet[3069]: E0517 00:30:04.531050 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.531129 kubelet[3069]: W0517 00:30:04.531089 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.531129 kubelet[3069]: E0517 00:30:04.531127 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.531779 kubelet[3069]: E0517 00:30:04.531742 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.531910 kubelet[3069]: W0517 00:30:04.531782 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.531910 kubelet[3069]: E0517 00:30:04.531820 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.532564 kubelet[3069]: E0517 00:30:04.532481 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.532564 kubelet[3069]: W0517 00:30:04.532520 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.532564 kubelet[3069]: E0517 00:30:04.532556 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:04.533190 kubelet[3069]: E0517 00:30:04.533107 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.533190 kubelet[3069]: W0517 00:30:04.533145 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.533190 kubelet[3069]: E0517 00:30:04.533184 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.533894 kubelet[3069]: E0517 00:30:04.533811 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.533894 kubelet[3069]: W0517 00:30:04.533849 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.533894 kubelet[3069]: E0517 00:30:04.533887 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.534512 kubelet[3069]: E0517 00:30:04.534437 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.534512 kubelet[3069]: W0517 00:30:04.534476 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.534512 kubelet[3069]: E0517 00:30:04.534506 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.535141 kubelet[3069]: E0517 00:30:04.535103 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.535268 kubelet[3069]: W0517 00:30:04.535142 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.535268 kubelet[3069]: E0517 00:30:04.535178 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.535975 kubelet[3069]: E0517 00:30:04.535889 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.535975 kubelet[3069]: W0517 00:30:04.535972 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.536270 kubelet[3069]: E0517 00:30:04.536014 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:04.536621 kubelet[3069]: E0517 00:30:04.536539 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.536621 kubelet[3069]: W0517 00:30:04.536570 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.536944 kubelet[3069]: E0517 00:30:04.536634 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.537255 kubelet[3069]: E0517 00:30:04.537178 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.537255 kubelet[3069]: W0517 00:30:04.537216 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.537255 kubelet[3069]: E0517 00:30:04.537251 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.537919 kubelet[3069]: E0517 00:30:04.537840 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.537919 kubelet[3069]: W0517 00:30:04.537878 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.537919 kubelet[3069]: E0517 00:30:04.537914 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.538524 kubelet[3069]: E0517 00:30:04.538443 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.538524 kubelet[3069]: W0517 00:30:04.538471 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.538524 kubelet[3069]: E0517 00:30:04.538498 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.539121 kubelet[3069]: E0517 00:30:04.539036 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.539121 kubelet[3069]: W0517 00:30:04.539078 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.539121 kubelet[3069]: E0517 00:30:04.539118 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:04.539775 kubelet[3069]: E0517 00:30:04.539687 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.539775 kubelet[3069]: W0517 00:30:04.539726 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.539775 kubelet[3069]: E0517 00:30:04.539763 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.540656 kubelet[3069]: E0517 00:30:04.540618 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.540798 kubelet[3069]: W0517 00:30:04.540657 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.540798 kubelet[3069]: E0517 00:30:04.540696 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.541326 kubelet[3069]: E0517 00:30:04.541281 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.541326 kubelet[3069]: W0517 00:30:04.541316 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.541664 kubelet[3069]: E0517 00:30:04.541356 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.543726 kubelet[3069]: E0517 00:30:04.543172 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.543726 kubelet[3069]: W0517 00:30:04.543262 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.543726 kubelet[3069]: E0517 00:30:04.543362 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.545158 kubelet[3069]: E0517 00:30:04.544831 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.545158 kubelet[3069]: W0517 00:30:04.544883 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.545601 kubelet[3069]: E0517 00:30:04.545220 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:04.546029 kubelet[3069]: E0517 00:30:04.545969 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.546250 kubelet[3069]: W0517 00:30:04.546027 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.546462 kubelet[3069]: E0517 00:30:04.546219 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.547519 kubelet[3069]: E0517 00:30:04.547465 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.547698 kubelet[3069]: W0517 00:30:04.547521 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.547698 kubelet[3069]: E0517 00:30:04.547579 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.548258 kubelet[3069]: E0517 00:30:04.548203 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.548433 kubelet[3069]: W0517 00:30:04.548254 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.548433 kubelet[3069]: E0517 00:30:04.548361 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.548922 kubelet[3069]: E0517 00:30:04.548882 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.549070 kubelet[3069]: W0517 00:30:04.548923 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.549212 kubelet[3069]: E0517 00:30:04.549054 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.549571 kubelet[3069]: E0517 00:30:04.549537 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.549571 kubelet[3069]: W0517 00:30:04.549566 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.549863 kubelet[3069]: E0517 00:30:04.549689 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:04.550182 kubelet[3069]: E0517 00:30:04.550143 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.550350 kubelet[3069]: W0517 00:30:04.550181 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.550350 kubelet[3069]: E0517 00:30:04.550307 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.550872 kubelet[3069]: E0517 00:30:04.550831 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.551022 kubelet[3069]: W0517 00:30:04.550869 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.551022 kubelet[3069]: E0517 00:30:04.550953 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.551394 kubelet[3069]: E0517 00:30:04.551343 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.551511 kubelet[3069]: W0517 00:30:04.551407 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.551511 kubelet[3069]: E0517 00:30:04.551461 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.552304 kubelet[3069]: E0517 00:30:04.552221 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.552304 kubelet[3069]: W0517 00:30:04.552259 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.552304 kubelet[3069]: E0517 00:30:04.552302 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.553007 kubelet[3069]: E0517 00:30:04.552932 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.553007 kubelet[3069]: W0517 00:30:04.552969 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.553310 kubelet[3069]: E0517 00:30:04.553087 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:04.553675 kubelet[3069]: E0517 00:30:04.553592 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.553675 kubelet[3069]: W0517 00:30:04.553629 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.554050 kubelet[3069]: E0517 00:30:04.553772 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.554165 kubelet[3069]: E0517 00:30:04.554143 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.554260 kubelet[3069]: W0517 00:30:04.554171 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.554260 kubelet[3069]: E0517 00:30:04.554231 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.554743 kubelet[3069]: E0517 00:30:04.554674 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.554743 kubelet[3069]: W0517 00:30:04.554713 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.555182 kubelet[3069]: E0517 00:30:04.554767 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:04.555320 kubelet[3069]: E0517 00:30:04.555291 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:04.555453 kubelet[3069]: W0517 00:30:04.555316 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:04.555453 kubelet[3069]: E0517 00:30:04.555345 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:05.393201 kubelet[3069]: E0517 00:30:05.393107 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdwc8" podUID="8e114f42-fc56-4b66-8d81-a37f65ab357c" May 17 00:30:05.444829 kubelet[3069]: I0517 00:30:05.444783 3069 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:30:05.447299 kubelet[3069]: E0517 00:30:05.447207 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.447299 kubelet[3069]: W0517 00:30:05.447243 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.447299 kubelet[3069]: E0517 00:30:05.447281 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.447945 kubelet[3069]: E0517 00:30:05.447856 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.447945 kubelet[3069]: W0517 00:30:05.447883 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.447945 kubelet[3069]: E0517 00:30:05.447911 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.448558 kubelet[3069]: E0517 00:30:05.448481 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.448558 kubelet[3069]: W0517 00:30:05.448519 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.448558 kubelet[3069]: E0517 00:30:05.448559 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.449205 kubelet[3069]: E0517 00:30:05.449142 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.449205 kubelet[3069]: W0517 00:30:05.449180 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.449459 kubelet[3069]: E0517 00:30:05.449220 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:05.449938 kubelet[3069]: E0517 00:30:05.449861 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.449938 kubelet[3069]: W0517 00:30:05.449899 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.449938 kubelet[3069]: E0517 00:30:05.449935 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.450576 kubelet[3069]: E0517 00:30:05.450497 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.450576 kubelet[3069]: W0517 00:30:05.450526 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.450576 kubelet[3069]: E0517 00:30:05.450557 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.451170 kubelet[3069]: E0517 00:30:05.451108 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.451170 kubelet[3069]: W0517 00:30:05.451145 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.451432 kubelet[3069]: E0517 00:30:05.451186 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.451862 kubelet[3069]: E0517 00:30:05.451782 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.451862 kubelet[3069]: W0517 00:30:05.451819 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.451862 kubelet[3069]: E0517 00:30:05.451855 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.452502 kubelet[3069]: E0517 00:30:05.452420 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.452502 kubelet[3069]: W0517 00:30:05.452448 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.452502 kubelet[3069]: E0517 00:30:05.452478 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:05.453084 kubelet[3069]: E0517 00:30:05.453003 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.453084 kubelet[3069]: W0517 00:30:05.453031 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.453084 kubelet[3069]: E0517 00:30:05.453059 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.453620 kubelet[3069]: E0517 00:30:05.453546 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.453620 kubelet[3069]: W0517 00:30:05.453575 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.453620 kubelet[3069]: E0517 00:30:05.453601 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.454229 kubelet[3069]: E0517 00:30:05.454141 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.454229 kubelet[3069]: W0517 00:30:05.454179 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.454229 kubelet[3069]: E0517 00:30:05.454214 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.454923 kubelet[3069]: E0517 00:30:05.454837 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.454923 kubelet[3069]: W0517 00:30:05.454874 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.454923 kubelet[3069]: E0517 00:30:05.454910 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.455558 kubelet[3069]: E0517 00:30:05.455515 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.455558 kubelet[3069]: W0517 00:30:05.455556 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.455853 kubelet[3069]: E0517 00:30:05.455592 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:05.456206 kubelet[3069]: E0517 00:30:05.456169 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.456314 kubelet[3069]: W0517 00:30:05.456208 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.456314 kubelet[3069]: E0517 00:30:05.456248 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.457115 kubelet[3069]: E0517 00:30:05.457028 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.457115 kubelet[3069]: W0517 00:30:05.457066 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.457115 kubelet[3069]: E0517 00:30:05.457101 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.457843 kubelet[3069]: E0517 00:30:05.457758 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.457843 kubelet[3069]: W0517 00:30:05.457797 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.457843 kubelet[3069]: E0517 00:30:05.457840 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.458564 kubelet[3069]: E0517 00:30:05.458486 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.458564 kubelet[3069]: W0517 00:30:05.458522 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.458861 kubelet[3069]: E0517 00:30:05.458573 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.459250 kubelet[3069]: E0517 00:30:05.459172 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.459250 kubelet[3069]: W0517 00:30:05.459211 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.459250 kubelet[3069]: E0517 00:30:05.459256 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:05.459964 kubelet[3069]: E0517 00:30:05.459880 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.459964 kubelet[3069]: W0517 00:30:05.459920 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.460278 kubelet[3069]: E0517 00:30:05.460014 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.460565 kubelet[3069]: E0517 00:30:05.460483 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.460565 kubelet[3069]: W0517 00:30:05.460518 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.460829 kubelet[3069]: E0517 00:30:05.460633 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.461194 kubelet[3069]: E0517 00:30:05.461107 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.461194 kubelet[3069]: W0517 00:30:05.461149 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.461545 kubelet[3069]: E0517 00:30:05.461236 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.461944 kubelet[3069]: E0517 00:30:05.461858 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.461944 kubelet[3069]: W0517 00:30:05.461899 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.462235 kubelet[3069]: E0517 00:30:05.462005 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.462568 kubelet[3069]: E0517 00:30:05.462487 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.462568 kubelet[3069]: W0517 00:30:05.462517 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.462568 kubelet[3069]: E0517 00:30:05.462560 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:05.463121 kubelet[3069]: E0517 00:30:05.463042 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.463121 kubelet[3069]: W0517 00:30:05.463071 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.463427 kubelet[3069]: E0517 00:30:05.463162 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.463666 kubelet[3069]: E0517 00:30:05.463590 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.463666 kubelet[3069]: W0517 00:30:05.463620 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.463666 kubelet[3069]: E0517 00:30:05.463657 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.464332 kubelet[3069]: E0517 00:30:05.464246 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.464332 kubelet[3069]: W0517 00:30:05.464284 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.464332 kubelet[3069]: E0517 00:30:05.464330 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.465043 kubelet[3069]: E0517 00:30:05.464964 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.465043 kubelet[3069]: W0517 00:30:05.465002 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.465332 kubelet[3069]: E0517 00:30:05.465096 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.465744 kubelet[3069]: E0517 00:30:05.465666 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.465744 kubelet[3069]: W0517 00:30:05.465706 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.466005 kubelet[3069]: E0517 00:30:05.465758 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:05.466463 kubelet[3069]: E0517 00:30:05.466353 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.466463 kubelet[3069]: W0517 00:30:05.466405 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.466463 kubelet[3069]: E0517 00:30:05.466443 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.467264 kubelet[3069]: E0517 00:30:05.467177 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.467264 kubelet[3069]: W0517 00:30:05.467216 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.467264 kubelet[3069]: E0517 00:30:05.467260 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.468004 kubelet[3069]: E0517 00:30:05.467924 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.468004 kubelet[3069]: W0517 00:30:05.467962 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.468004 kubelet[3069]: E0517 00:30:05.468007 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:05.468679 kubelet[3069]: E0517 00:30:05.468593 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:05.468679 kubelet[3069]: W0517 00:30:05.468631 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:05.468679 kubelet[3069]: E0517 00:30:05.468669 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:05.681991 containerd[1808]: time="2025-05-17T00:30:05.681910564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:05.682244 containerd[1808]: time="2025-05-17T00:30:05.682084042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:30:05.682552 containerd[1808]: time="2025-05-17T00:30:05.682510973Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:05.683917 containerd[1808]: time="2025-05-17T00:30:05.683847154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:05.685127 containerd[1808]: time="2025-05-17T00:30:05.685095366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.716118442s" May 17 00:30:05.685177 containerd[1808]: time="2025-05-17T00:30:05.685133007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:30:05.686257 containerd[1808]: time="2025-05-17T00:30:05.686242138Z" level=info msg="CreateContainer within sandbox \"038315c108ea92147b162c291283800e77183ca74c89db069aa563441dde6615\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:30:05.691631 containerd[1808]: time="2025-05-17T00:30:05.691612024Z" level=info msg="CreateContainer within sandbox \"038315c108ea92147b162c291283800e77183ca74c89db069aa563441dde6615\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9ed48a7e4f7e57dddd7221ecffe699a835fb2811b93a775ad1f6970ce362f702\"" May 17 00:30:05.691963 containerd[1808]: time="2025-05-17T00:30:05.691922254Z" level=info msg="StartContainer for \"9ed48a7e4f7e57dddd7221ecffe699a835fb2811b93a775ad1f6970ce362f702\"" May 17 00:30:05.721591 systemd[1]: Started cri-containerd-9ed48a7e4f7e57dddd7221ecffe699a835fb2811b93a775ad1f6970ce362f702.scope - libcontainer container 9ed48a7e4f7e57dddd7221ecffe699a835fb2811b93a775ad1f6970ce362f702. May 17 00:30:05.749040 containerd[1808]: time="2025-05-17T00:30:05.748998193Z" level=info msg="StartContainer for \"9ed48a7e4f7e57dddd7221ecffe699a835fb2811b93a775ad1f6970ce362f702\" returns successfully" May 17 00:30:05.756067 systemd[1]: cri-containerd-9ed48a7e4f7e57dddd7221ecffe699a835fb2811b93a775ad1f6970ce362f702.scope: Deactivated successfully. May 17 00:30:05.976961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ed48a7e4f7e57dddd7221ecffe699a835fb2811b93a775ad1f6970ce362f702-rootfs.mount: Deactivated successfully. 
May 17 00:30:06.214140 containerd[1808]: time="2025-05-17T00:30:06.214071640Z" level=info msg="shim disconnected" id=9ed48a7e4f7e57dddd7221ecffe699a835fb2811b93a775ad1f6970ce362f702 namespace=k8s.io May 17 00:30:06.214140 containerd[1808]: time="2025-05-17T00:30:06.214133991Z" level=warning msg="cleaning up after shim disconnected" id=9ed48a7e4f7e57dddd7221ecffe699a835fb2811b93a775ad1f6970ce362f702 namespace=k8s.io May 17 00:30:06.214140 containerd[1808]: time="2025-05-17T00:30:06.214143751Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:30:06.454116 containerd[1808]: time="2025-05-17T00:30:06.453864730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:30:07.392704 kubelet[3069]: E0517 00:30:07.392588 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdwc8" podUID="8e114f42-fc56-4b66-8d81-a37f65ab357c" May 17 00:30:09.392747 kubelet[3069]: E0517 00:30:09.392662 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdwc8" podUID="8e114f42-fc56-4b66-8d81-a37f65ab357c" May 17 00:30:10.494579 containerd[1808]: time="2025-05-17T00:30:10.494525568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:10.494769 containerd[1808]: time="2025-05-17T00:30:10.494746364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:30:10.495088 containerd[1808]: time="2025-05-17T00:30:10.495047104Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:10.497264 containerd[1808]: time="2025-05-17T00:30:10.497218575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:10.497531 containerd[1808]: time="2025-05-17T00:30:10.497492496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 4.043564284s" May 17 00:30:10.497531 containerd[1808]: time="2025-05-17T00:30:10.497506689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:30:10.498568 containerd[1808]: time="2025-05-17T00:30:10.498555781Z" level=info msg="CreateContainer within sandbox \"038315c108ea92147b162c291283800e77183ca74c89db069aa563441dde6615\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:30:10.502969 containerd[1808]: time="2025-05-17T00:30:10.502923557Z" level=info msg="CreateContainer within sandbox \"038315c108ea92147b162c291283800e77183ca74c89db069aa563441dde6615\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"251ff904669a8151709bb4d3e1c17d11f09be21a03168f9d39228765eee1ec14\"" May 17 00:30:10.503152 containerd[1808]: time="2025-05-17T00:30:10.503138018Z" level=info msg="StartContainer for \"251ff904669a8151709bb4d3e1c17d11f09be21a03168f9d39228765eee1ec14\"" May 17 00:30:10.533547 systemd[1]: Started cri-containerd-251ff904669a8151709bb4d3e1c17d11f09be21a03168f9d39228765eee1ec14.scope - libcontainer container 251ff904669a8151709bb4d3e1c17d11f09be21a03168f9d39228765eee1ec14. May 17 00:30:10.547619 containerd[1808]: time="2025-05-17T00:30:10.547593834Z" level=info msg="StartContainer for \"251ff904669a8151709bb4d3e1c17d11f09be21a03168f9d39228765eee1ec14\" returns successfully" May 17 00:30:11.153663 systemd[1]: cri-containerd-251ff904669a8151709bb4d3e1c17d11f09be21a03168f9d39228765eee1ec14.scope: Deactivated successfully. May 17 00:30:11.205131 kubelet[3069]: I0517 00:30:11.205104 3069 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:30:11.231971 systemd[1]: Created slice kubepods-besteffort-pod93f363c3_f873_4d07_a216_fd2fe6414a28.slice - libcontainer container kubepods-besteffort-pod93f363c3_f873_4d07_a216_fd2fe6414a28.slice. May 17 00:30:11.238572 systemd[1]: Created slice kubepods-burstable-pod441d803b_df59_4a3e_b55c_43834c087e2b.slice - libcontainer container kubepods-burstable-pod441d803b_df59_4a3e_b55c_43834c087e2b.slice. May 17 00:30:11.245916 systemd[1]: Created slice kubepods-burstable-pod83bc9904_f910_4661_b26c_3bab2e3ff098.slice - libcontainer container kubepods-burstable-pod83bc9904_f910_4661_b26c_3bab2e3ff098.slice. May 17 00:30:11.254155 systemd[1]: Created slice kubepods-besteffort-pod7e5ca975_ef7f_414a_942f_bcd57dc8d07a.slice - libcontainer container kubepods-besteffort-pod7e5ca975_ef7f_414a_942f_bcd57dc8d07a.slice. May 17 00:30:11.261798 systemd[1]: Created slice kubepods-besteffort-pod79106d86_9187_44ab_a1d7_9ef14c711cf6.slice - libcontainer container kubepods-besteffort-pod79106d86_9187_44ab_a1d7_9ef14c711cf6.slice. May 17 00:30:11.268856 systemd[1]: Created slice kubepods-besteffort-pod4337522a_e2a1_45a2_9102_0c0650fe6aee.slice - libcontainer container kubepods-besteffort-pod4337522a_e2a1_45a2_9102_0c0650fe6aee.slice. May 17 00:30:11.275951 systemd[1]: Created slice kubepods-besteffort-pod47a8b75a_0ae5_4d27_9954_209696bc0aa7.slice - libcontainer container kubepods-besteffort-pod47a8b75a_0ae5_4d27_9954_209696bc0aa7.slice. 
May 17 00:30:11.303613 kubelet[3069]: I0517 00:30:11.303576 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29nzf\" (UniqueName: \"kubernetes.io/projected/93f363c3-f873-4d07-a216-fd2fe6414a28-kube-api-access-29nzf\") pod \"calico-apiserver-688cf69547-wc762\" (UID: \"93f363c3-f873-4d07-a216-fd2fe6414a28\") " pod="calico-apiserver/calico-apiserver-688cf69547-wc762" May 17 00:30:11.303762 kubelet[3069]: I0517 00:30:11.303618 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/47a8b75a-0ae5-4d27-9954-209696bc0aa7-calico-apiserver-certs\") pod \"calico-apiserver-688cf69547-z8c5c\" (UID: \"47a8b75a-0ae5-4d27-9954-209696bc0aa7\") " pod="calico-apiserver/calico-apiserver-688cf69547-z8c5c" May 17 00:30:11.303762 kubelet[3069]: I0517 00:30:11.303659 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e5ca975-ef7f-414a-942f-bcd57dc8d07a-tigera-ca-bundle\") pod \"calico-kube-controllers-c7b857f57-2t2zm\" (UID: \"7e5ca975-ef7f-414a-942f-bcd57dc8d07a\") " pod="calico-system/calico-kube-controllers-c7b857f57-2t2zm" May 17 00:30:11.303762 kubelet[3069]: I0517 00:30:11.303681 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79106d86-9187-44ab-a1d7-9ef14c711cf6-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-st7xr\" (UID: \"79106d86-9187-44ab-a1d7-9ef14c711cf6\") " pod="calico-system/goldmane-8f77d7b6c-st7xr" May 17 00:30:11.303762 kubelet[3069]: I0517 00:30:11.303701 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4337522a-e2a1-45a2-9102-0c0650fe6aee-whisker-ca-bundle\") pod \"whisker-78ff969995-tfnzx\" (UID: \"4337522a-e2a1-45a2-9102-0c0650fe6aee\") " pod="calico-system/whisker-78ff969995-tfnzx" May 17 00:30:11.303762 kubelet[3069]: I0517 00:30:11.303734 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79106d86-9187-44ab-a1d7-9ef14c711cf6-config\") pod \"goldmane-8f77d7b6c-st7xr\" (UID: \"79106d86-9187-44ab-a1d7-9ef14c711cf6\") " pod="calico-system/goldmane-8f77d7b6c-st7xr" May 17 00:30:11.304002 kubelet[3069]: I0517 00:30:11.303759 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2rhp\" (UniqueName: \"kubernetes.io/projected/83bc9904-f910-4661-b26c-3bab2e3ff098-kube-api-access-c2rhp\") pod \"coredns-7c65d6cfc9-kbg69\" (UID: \"83bc9904-f910-4661-b26c-3bab2e3ff098\") " pod="kube-system/coredns-7c65d6cfc9-kbg69" May 17 00:30:11.304002 kubelet[3069]: I0517 00:30:11.303786 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/441d803b-df59-4a3e-b55c-43834c087e2b-config-volume\") pod \"coredns-7c65d6cfc9-2nkhg\" (UID: \"441d803b-df59-4a3e-b55c-43834c087e2b\") " pod="kube-system/coredns-7c65d6cfc9-2nkhg" May 17 00:30:11.304002 kubelet[3069]: I0517 00:30:11.303844 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/79106d86-9187-44ab-a1d7-9ef14c711cf6-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-st7xr\" (UID: \"79106d86-9187-44ab-a1d7-9ef14c711cf6\") " pod="calico-system/goldmane-8f77d7b6c-st7xr" May 17 00:30:11.304002 kubelet[3069]: I0517 00:30:11.303899 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6kf7\" (UniqueName: \"kubernetes.io/projected/79106d86-9187-44ab-a1d7-9ef14c711cf6-kube-api-access-s6kf7\") pod \"goldmane-8f77d7b6c-st7xr\" (UID: \"79106d86-9187-44ab-a1d7-9ef14c711cf6\") " pod="calico-system/goldmane-8f77d7b6c-st7xr" May 17 00:30:11.304002 kubelet[3069]: I0517 00:30:11.303932 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4337522a-e2a1-45a2-9102-0c0650fe6aee-whisker-backend-key-pair\") pod \"whisker-78ff969995-tfnzx\" (UID: \"4337522a-e2a1-45a2-9102-0c0650fe6aee\") " pod="calico-system/whisker-78ff969995-tfnzx" May 17 00:30:11.304239 kubelet[3069]: I0517 00:30:11.303961 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/93f363c3-f873-4d07-a216-fd2fe6414a28-calico-apiserver-certs\") pod \"calico-apiserver-688cf69547-wc762\" (UID: \"93f363c3-f873-4d07-a216-fd2fe6414a28\") " pod="calico-apiserver/calico-apiserver-688cf69547-wc762" May 17 00:30:11.304239 kubelet[3069]: I0517 00:30:11.303985 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83bc9904-f910-4661-b26c-3bab2e3ff098-config-volume\") pod \"coredns-7c65d6cfc9-kbg69\" (UID: \"83bc9904-f910-4661-b26c-3bab2e3ff098\") " pod="kube-system/coredns-7c65d6cfc9-kbg69" May 17 00:30:11.304239 kubelet[3069]: I0517 00:30:11.304016 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt8rf\" (UniqueName: \"kubernetes.io/projected/47a8b75a-0ae5-4d27-9954-209696bc0aa7-kube-api-access-dt8rf\") pod \"calico-apiserver-688cf69547-z8c5c\" (UID: \"47a8b75a-0ae5-4d27-9954-209696bc0aa7\") " pod="calico-apiserver/calico-apiserver-688cf69547-z8c5c" May 17 00:30:11.304239 kubelet[3069]: I0517 00:30:11.304037 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7p7t\" (UniqueName: \"kubernetes.io/projected/4337522a-e2a1-45a2-9102-0c0650fe6aee-kube-api-access-l7p7t\") pod \"whisker-78ff969995-tfnzx\" (UID: \"4337522a-e2a1-45a2-9102-0c0650fe6aee\") " pod="calico-system/whisker-78ff969995-tfnzx" May 17 00:30:11.304239 kubelet[3069]: I0517 00:30:11.304067 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjxq7\" (UniqueName: \"kubernetes.io/projected/7e5ca975-ef7f-414a-942f-bcd57dc8d07a-kube-api-access-kjxq7\") pod \"calico-kube-controllers-c7b857f57-2t2zm\" (UID: \"7e5ca975-ef7f-414a-942f-bcd57dc8d07a\") " pod="calico-system/calico-kube-controllers-c7b857f57-2t2zm" May 17 00:30:11.304506 kubelet[3069]: I0517 00:30:11.304094 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j99h4\" (UniqueName: \"kubernetes.io/projected/441d803b-df59-4a3e-b55c-43834c087e2b-kube-api-access-j99h4\") pod \"coredns-7c65d6cfc9-2nkhg\" (UID: \"441d803b-df59-4a3e-b55c-43834c087e2b\") " 
pod="kube-system/coredns-7c65d6cfc9-2nkhg" May 17 00:30:11.410018 systemd[1]: Created slice kubepods-besteffort-pod8e114f42_fc56_4b66_8d81_a37f65ab357c.slice - libcontainer container kubepods-besteffort-pod8e114f42_fc56_4b66_8d81_a37f65ab357c.slice. May 17 00:30:11.432268 containerd[1808]: time="2025-05-17T00:30:11.432229835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdwc8,Uid:8e114f42-fc56-4b66-8d81-a37f65ab357c,Namespace:calico-system,Attempt:0,}" May 17 00:30:11.514678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-251ff904669a8151709bb4d3e1c17d11f09be21a03168f9d39228765eee1ec14-rootfs.mount: Deactivated successfully. May 17 00:30:11.536021 containerd[1808]: time="2025-05-17T00:30:11.535988822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-688cf69547-wc762,Uid:93f363c3-f873-4d07-a216-fd2fe6414a28,Namespace:calico-apiserver,Attempt:0,}" May 17 00:30:11.542819 containerd[1808]: time="2025-05-17T00:30:11.542775745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2nkhg,Uid:441d803b-df59-4a3e-b55c-43834c087e2b,Namespace:kube-system,Attempt:0,}" May 17 00:30:11.550224 containerd[1808]: time="2025-05-17T00:30:11.550115239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kbg69,Uid:83bc9904-f910-4661-b26c-3bab2e3ff098,Namespace:kube-system,Attempt:0,}" May 17 00:30:11.555968 containerd[1808]: time="2025-05-17T00:30:11.555942282Z" level=info msg="shim disconnected" id=251ff904669a8151709bb4d3e1c17d11f09be21a03168f9d39228765eee1ec14 namespace=k8s.io May 17 00:30:11.555968 containerd[1808]: time="2025-05-17T00:30:11.555965879Z" level=warning msg="cleaning up after shim disconnected" id=251ff904669a8151709bb4d3e1c17d11f09be21a03168f9d39228765eee1ec14 namespace=k8s.io May 17 00:30:11.556037 containerd[1808]: time="2025-05-17T00:30:11.555971822Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:30:11.557432 containerd[1808]: time="2025-05-17T00:30:11.557416401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c7b857f57-2t2zm,Uid:7e5ca975-ef7f-414a-942f-bcd57dc8d07a,Namespace:calico-system,Attempt:0,}" May 17 00:30:11.565973 containerd[1808]: time="2025-05-17T00:30:11.565949630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-st7xr,Uid:79106d86-9187-44ab-a1d7-9ef14c711cf6,Namespace:calico-system,Attempt:0,}" May 17 00:30:11.572468 containerd[1808]: time="2025-05-17T00:30:11.572436802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78ff969995-tfnzx,Uid:4337522a-e2a1-45a2-9102-0c0650fe6aee,Namespace:calico-system,Attempt:0,}" May 17 00:30:11.579265 containerd[1808]: time="2025-05-17T00:30:11.579228544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-688cf69547-z8c5c,Uid:47a8b75a-0ae5-4d27-9954-209696bc0aa7,Namespace:calico-apiserver,Attempt:0,}" May 17 00:30:11.591878 containerd[1808]: time="2025-05-17T00:30:11.591845223Z" level=error msg="Failed to destroy network for sandbox \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.592126 containerd[1808]: time="2025-05-17T00:30:11.592105864Z" level=error msg="encountered an error cleaning up failed sandbox 
\"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.592175 containerd[1808]: time="2025-05-17T00:30:11.592146165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdwc8,Uid:8e114f42-fc56-4b66-8d81-a37f65ab357c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.592392 kubelet[3069]: E0517 00:30:11.592345 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.592456 kubelet[3069]: E0517 00:30:11.592434 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zdwc8" May 17 00:30:11.592485 kubelet[3069]: E0517 00:30:11.592456 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zdwc8" May 17 00:30:11.592523 kubelet[3069]: E0517 00:30:11.592502 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zdwc8_calico-system(8e114f42-fc56-4b66-8d81-a37f65ab357c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zdwc8_calico-system(8e114f42-fc56-4b66-8d81-a37f65ab357c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zdwc8" podUID="8e114f42-fc56-4b66-8d81-a37f65ab357c" May 17 00:30:11.593755 containerd[1808]: time="2025-05-17T00:30:11.593713866Z" level=error msg="Failed to destroy network for sandbox \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.593891 containerd[1808]: time="2025-05-17T00:30:11.593871039Z" level=error msg="Failed to 
destroy network for sandbox \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.593956 containerd[1808]: time="2025-05-17T00:30:11.593942632Z" level=error msg="encountered an error cleaning up failed sandbox \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.593988 containerd[1808]: time="2025-05-17T00:30:11.593976807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-688cf69547-wc762,Uid:93f363c3-f873-4d07-a216-fd2fe6414a28,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.594053 containerd[1808]: time="2025-05-17T00:30:11.594038664Z" level=error msg="encountered an error cleaning up failed sandbox \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.594080 containerd[1808]: time="2025-05-17T00:30:11.594066199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2nkhg,Uid:441d803b-df59-4a3e-b55c-43834c087e2b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.594153 kubelet[3069]: E0517 00:30:11.594125 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.594191 kubelet[3069]: E0517 00:30:11.594179 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-688cf69547-wc762" May 17 00:30:11.594219 kubelet[3069]: E0517 00:30:11.594198 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-688cf69547-wc762" May 17 00:30:11.594219 kubelet[3069]: E0517 00:30:11.594138 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.594288 kubelet[3069]: E0517 00:30:11.594229 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2nkhg" May 17 00:30:11.594288 kubelet[3069]: E0517 00:30:11.594234 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-688cf69547-wc762_calico-apiserver(93f363c3-f873-4d07-a216-fd2fe6414a28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-688cf69547-wc762_calico-apiserver(93f363c3-f873-4d07-a216-fd2fe6414a28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-688cf69547-wc762" podUID="93f363c3-f873-4d07-a216-fd2fe6414a28" May 17 00:30:11.594288 kubelet[3069]: E0517 00:30:11.594242 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2nkhg" May 17 00:30:11.594416 kubelet[3069]: E0517 00:30:11.594261 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2nkhg_kube-system(441d803b-df59-4a3e-b55c-43834c087e2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2nkhg_kube-system(441d803b-df59-4a3e-b55c-43834c087e2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2nkhg" podUID="441d803b-df59-4a3e-b55c-43834c087e2b" May 17 00:30:11.594453 containerd[1808]: time="2025-05-17T00:30:11.594330004Z" level=error msg="Failed to destroy network for sandbox \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.594480 containerd[1808]: time="2025-05-17T00:30:11.594466356Z" level=error msg="encountered an error cleaning up failed sandbox \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.594504 containerd[1808]: time="2025-05-17T00:30:11.594487997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kbg69,Uid:83bc9904-f910-4661-b26c-3bab2e3ff098,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.594567 kubelet[3069]: E0517 00:30:11.594550 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.594588 kubelet[3069]: E0517 00:30:11.594575 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kbg69" May 17 00:30:11.594608 kubelet[3069]: E0517 00:30:11.594586 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kbg69" May 17 00:30:11.594629 kubelet[3069]: E0517 00:30:11.594607 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kbg69_kube-system(83bc9904-f910-4661-b26c-3bab2e3ff098)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kbg69_kube-system(83bc9904-f910-4661-b26c-3bab2e3ff098)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kbg69" podUID="83bc9904-f910-4661-b26c-3bab2e3ff098" May 17 00:30:11.598497 containerd[1808]: time="2025-05-17T00:30:11.598460342Z" level=error msg="Failed to destroy network for sandbox \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.598721 containerd[1808]: time="2025-05-17T00:30:11.598706367Z" level=error msg="encountered an error cleaning up failed sandbox \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.598770 containerd[1808]: time="2025-05-17T00:30:11.598737228Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c7b857f57-2t2zm,Uid:7e5ca975-ef7f-414a-942f-bcd57dc8d07a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.598897 kubelet[3069]: E0517 00:30:11.598867 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.598946 kubelet[3069]: E0517 00:30:11.598920 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c7b857f57-2t2zm" May 17 00:30:11.598983 kubelet[3069]: E0517 00:30:11.598941 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c7b857f57-2t2zm" May 17 00:30:11.599017 kubelet[3069]: E0517 00:30:11.598984 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c7b857f57-2t2zm_calico-system(7e5ca975-ef7f-414a-942f-bcd57dc8d07a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c7b857f57-2t2zm_calico-system(7e5ca975-ef7f-414a-942f-bcd57dc8d07a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c7b857f57-2t2zm" podUID="7e5ca975-ef7f-414a-942f-bcd57dc8d07a" May 17 00:30:11.602883 containerd[1808]: time="2025-05-17T00:30:11.602851354Z" level=error msg="Failed 
to destroy network for sandbox \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.603051 containerd[1808]: time="2025-05-17T00:30:11.603037493Z" level=error msg="encountered an error cleaning up failed sandbox \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.603083 containerd[1808]: time="2025-05-17T00:30:11.603072326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-st7xr,Uid:79106d86-9187-44ab-a1d7-9ef14c711cf6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.603195 kubelet[3069]: E0517 00:30:11.603176 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.603229 kubelet[3069]: E0517 00:30:11.603210 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-st7xr" May 17 00:30:11.603229 kubelet[3069]: E0517 00:30:11.603222 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-st7xr" May 17 00:30:11.603271 kubelet[3069]: E0517 00:30:11.603249 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-st7xr_calico-system(79106d86-9187-44ab-a1d7-9ef14c711cf6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-st7xr_calico-system(79106d86-9187-44ab-a1d7-9ef14c711cf6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:30:11.604121 containerd[1808]: 
time="2025-05-17T00:30:11.604102092Z" level=error msg="Failed to destroy network for sandbox \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.604261 containerd[1808]: time="2025-05-17T00:30:11.604249321Z" level=error msg="encountered an error cleaning up failed sandbox \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.604290 containerd[1808]: time="2025-05-17T00:30:11.604273385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78ff969995-tfnzx,Uid:4337522a-e2a1-45a2-9102-0c0650fe6aee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.604381 kubelet[3069]: E0517 00:30:11.604361 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.604405 kubelet[3069]: E0517 00:30:11.604390 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78ff969995-tfnzx" May 17 00:30:11.604405 kubelet[3069]: E0517 00:30:11.604401 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78ff969995-tfnzx" May 17 00:30:11.604444 kubelet[3069]: E0517 00:30:11.604420 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78ff969995-tfnzx_calico-system(4337522a-e2a1-45a2-9102-0c0650fe6aee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78ff969995-tfnzx_calico-system(4337522a-e2a1-45a2-9102-0c0650fe6aee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78ff969995-tfnzx" 
podUID="4337522a-e2a1-45a2-9102-0c0650fe6aee" May 17 00:30:11.610876 containerd[1808]: time="2025-05-17T00:30:11.610852114Z" level=error msg="Failed to destroy network for sandbox \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.611033 containerd[1808]: time="2025-05-17T00:30:11.611020581Z" level=error msg="encountered an error cleaning up failed sandbox \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.611057 containerd[1808]: time="2025-05-17T00:30:11.611047172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-688cf69547-z8c5c,Uid:47a8b75a-0ae5-4d27-9954-209696bc0aa7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.611164 kubelet[3069]: E0517 00:30:11.611148 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:11.611190 kubelet[3069]: E0517 00:30:11.611176 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-688cf69547-z8c5c" May 17 00:30:11.611212 kubelet[3069]: E0517 00:30:11.611189 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-688cf69547-z8c5c" May 17 00:30:11.611232 kubelet[3069]: E0517 00:30:11.611211 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-688cf69547-z8c5c_calico-apiserver(47a8b75a-0ae5-4d27-9954-209696bc0aa7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-688cf69547-z8c5c_calico-apiserver(47a8b75a-0ae5-4d27-9954-209696bc0aa7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-688cf69547-z8c5c" podUID="47a8b75a-0ae5-4d27-9954-209696bc0aa7" May 17 00:30:12.466895 kubelet[3069]: I0517 00:30:12.466829 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:12.468299 containerd[1808]: time="2025-05-17T00:30:12.468224792Z" level=info msg="StopPodSandbox for \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\"" May 17 00:30:12.468809 containerd[1808]: time="2025-05-17T00:30:12.468746367Z" level=info msg="Ensure that sandbox 746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74 in task-service has been cleanup successfully" May 17 00:30:12.469112 kubelet[3069]: I0517 00:30:12.469058 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:12.470353 containerd[1808]: time="2025-05-17T00:30:12.470276741Z" level=info msg="StopPodSandbox for \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\"" May 17 00:30:12.470973 containerd[1808]: time="2025-05-17T00:30:12.470868777Z" level=info msg="Ensure that sandbox 216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce in task-service has been cleanup successfully" May 17 00:30:12.471596 kubelet[3069]: I0517 00:30:12.471536 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:12.472846 containerd[1808]: time="2025-05-17T00:30:12.472772564Z" level=info msg="StopPodSandbox for \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\"" May 17 00:30:12.473287 containerd[1808]: time="2025-05-17T00:30:12.473223539Z" level=info msg="Ensure that sandbox 561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2 in task-service has been cleanup successfully" May 17 00:30:12.475488 kubelet[3069]: I0517 00:30:12.475470 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:12.475616 containerd[1808]: time="2025-05-17T00:30:12.475600675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:30:12.475801 containerd[1808]: time="2025-05-17T00:30:12.475787165Z" level=info msg="StopPodSandbox for \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\"" May 17 00:30:12.475896 containerd[1808]: time="2025-05-17T00:30:12.475886200Z" level=info msg="Ensure that sandbox 90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb in task-service has been cleanup successfully" May 17 00:30:12.475934 kubelet[3069]: I0517 00:30:12.475926 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:12.476245 containerd[1808]: time="2025-05-17T00:30:12.476225718Z" level=info msg="StopPodSandbox for \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\"" May 17 00:30:12.476371 containerd[1808]: time="2025-05-17T00:30:12.476356654Z" level=info msg="Ensure that sandbox 3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae in task-service has been cleanup successfully" May 17 00:30:12.476476 kubelet[3069]: I0517 00:30:12.476462 3069 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:12.476830 containerd[1808]: time="2025-05-17T00:30:12.476803882Z" level=info msg="StopPodSandbox for \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\"" May 17 00:30:12.476950 containerd[1808]: time="2025-05-17T00:30:12.476938839Z" level=info msg="Ensure that sandbox 17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681 in task-service has been cleanup successfully" May 17 00:30:12.477019 kubelet[3069]: I0517 00:30:12.477002 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:12.477388 containerd[1808]: time="2025-05-17T00:30:12.477364274Z" level=info msg="StopPodSandbox for \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\"" May 17 00:30:12.477585 containerd[1808]: time="2025-05-17T00:30:12.477569242Z" level=info msg="Ensure that sandbox 930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7 in task-service has been cleanup successfully" May 17 00:30:12.477647 kubelet[3069]: I0517 00:30:12.477583 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:12.478038 containerd[1808]: time="2025-05-17T00:30:12.478008293Z" level=info msg="StopPodSandbox for \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\"" May 17 00:30:12.478394 containerd[1808]: time="2025-05-17T00:30:12.478210986Z" level=info msg="Ensure that sandbox 3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2 in task-service has been cleanup successfully" May 17 00:30:12.490303 containerd[1808]: time="2025-05-17T00:30:12.490243976Z" level=error msg="StopPodSandbox for \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\" failed" error="failed to destroy network for sandbox \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:12.490459 kubelet[3069]: E0517 00:30:12.490427 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:12.490522 kubelet[3069]: E0517 00:30:12.490482 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce"} May 17 00:30:12.490564 kubelet[3069]: E0517 00:30:12.490543 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79106d86-9187-44ab-a1d7-9ef14c711cf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" May 17 00:30:12.490650 kubelet[3069]: E0517 00:30:12.490569 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79106d86-9187-44ab-a1d7-9ef14c711cf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:30:12.490925 containerd[1808]: time="2025-05-17T00:30:12.490886695Z" level=error msg="StopPodSandbox for \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\" failed" error="failed to destroy network for sandbox \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:12.491209 kubelet[3069]: E0517 00:30:12.491183 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:12.491467 kubelet[3069]: E0517 00:30:12.491219 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2"} May 17 00:30:12.491467 kubelet[3069]: E0517 00:30:12.491250 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"441d803b-df59-4a3e-b55c-43834c087e2b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:12.491467 kubelet[3069]: E0517 00:30:12.491268 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"441d803b-df59-4a3e-b55c-43834c087e2b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2nkhg" podUID="441d803b-df59-4a3e-b55c-43834c087e2b" May 17 00:30:12.491467 kubelet[3069]: E0517 00:30:12.491443 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:12.491594 containerd[1808]: time="2025-05-17T00:30:12.491348617Z" level=error msg="StopPodSandbox for \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\" failed" error="failed to destroy network for sandbox \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:12.491623 kubelet[3069]: E0517 00:30:12.491463 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74"} May 17 00:30:12.491623 kubelet[3069]: E0517 00:30:12.491490 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e114f42-fc56-4b66-8d81-a37f65ab357c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:12.491623 kubelet[3069]: E0517 00:30:12.491502 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e114f42-fc56-4b66-8d81-a37f65ab357c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zdwc8" podUID="8e114f42-fc56-4b66-8d81-a37f65ab357c" May 17 00:30:12.492490 containerd[1808]: time="2025-05-17T00:30:12.492442686Z" level=error msg="StopPodSandbox for \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\" failed" error="failed to destroy network for sandbox \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:12.492551 kubelet[3069]: E0517 00:30:12.492533 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:12.492587 kubelet[3069]: E0517 00:30:12.492558 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae"} May 17 00:30:12.492587 kubelet[3069]: E0517 00:30:12.492577 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83bc9904-f910-4661-b26c-3bab2e3ff098\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:12.492643 kubelet[3069]: E0517 00:30:12.492591 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83bc9904-f910-4661-b26c-3bab2e3ff098\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kbg69" podUID="83bc9904-f910-4661-b26c-3bab2e3ff098" May 17 00:30:12.492732 containerd[1808]: time="2025-05-17T00:30:12.492717562Z" level=error msg="StopPodSandbox for \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\" failed" error="failed to destroy network for sandbox \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:12.492795 kubelet[3069]: E0517 00:30:12.492781 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:12.492823 kubelet[3069]: E0517 00:30:12.492802 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb"} May 17 00:30:12.492823 kubelet[3069]: E0517 00:30:12.492819 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47a8b75a-0ae5-4d27-9954-209696bc0aa7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:12.492872 kubelet[3069]: E0517 00:30:12.492833 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47a8b75a-0ae5-4d27-9954-209696bc0aa7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-688cf69547-z8c5c" podUID="47a8b75a-0ae5-4d27-9954-209696bc0aa7" May 17 00:30:12.493069 containerd[1808]: time="2025-05-17T00:30:12.493050014Z" level=error msg="StopPodSandbox for \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\" 
failed" error="failed to destroy network for sandbox \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:12.493140 kubelet[3069]: E0517 00:30:12.493129 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:12.493163 kubelet[3069]: E0517 00:30:12.493144 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2"} May 17 00:30:12.493163 kubelet[3069]: E0517 00:30:12.493156 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93f363c3-f873-4d07-a216-fd2fe6414a28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:12.493211 kubelet[3069]: E0517 00:30:12.493167 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93f363c3-f873-4d07-a216-fd2fe6414a28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-688cf69547-wc762" podUID="93f363c3-f873-4d07-a216-fd2fe6414a28" May 17 00:30:12.493399 containerd[1808]: time="2025-05-17T00:30:12.493385266Z" level=error msg="StopPodSandbox for \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\" failed" error="failed to destroy network for sandbox \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:12.493501 kubelet[3069]: E0517 00:30:12.493445 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:12.493522 kubelet[3069]: E0517 00:30:12.493507 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7"} May 17 00:30:12.493558 kubelet[3069]: 
E0517 00:30:12.493522 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e5ca975-ef7f-414a-942f-bcd57dc8d07a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:12.493558 kubelet[3069]: E0517 00:30:12.493549 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e5ca975-ef7f-414a-942f-bcd57dc8d07a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c7b857f57-2t2zm" podUID="7e5ca975-ef7f-414a-942f-bcd57dc8d07a" May 17 00:30:12.493692 containerd[1808]: time="2025-05-17T00:30:12.493656012Z" level=error msg="StopPodSandbox for \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\" failed" error="failed to destroy network for sandbox \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:12.493794 kubelet[3069]: E0517 00:30:12.493768 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:12.493831 kubelet[3069]: E0517 00:30:12.493796 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681"} May 17 00:30:12.493831 kubelet[3069]: E0517 00:30:12.493825 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4337522a-e2a1-45a2-9102-0c0650fe6aee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:12.493889 kubelet[3069]: E0517 00:30:12.493834 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4337522a-e2a1-45a2-9102-0c0650fe6aee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-78ff969995-tfnzx" podUID="4337522a-e2a1-45a2-9102-0c0650fe6aee" May 17 00:30:12.503447 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7-shm.mount: Deactivated successfully. May 17 00:30:12.503500 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae-shm.mount: Deactivated successfully. May 17 00:30:12.503535 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2-shm.mount: Deactivated successfully. May 17 00:30:12.503569 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2-shm.mount: Deactivated successfully. May 17 00:30:12.503600 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74-shm.mount: Deactivated successfully. May 17 00:30:18.297942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount977315888.mount: Deactivated successfully. May 17 00:30:18.315017 containerd[1808]: time="2025-05-17T00:30:18.314993129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:18.315343 containerd[1808]: time="2025-05-17T00:30:18.315253846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:30:18.315621 containerd[1808]: time="2025-05-17T00:30:18.315608810Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:18.316505 containerd[1808]: time="2025-05-17T00:30:18.316494910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:18.316856 containerd[1808]: time="2025-05-17T00:30:18.316842418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 5.84121738s" May 17 00:30:18.316883 containerd[1808]: time="2025-05-17T00:30:18.316860994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:30:18.320240 containerd[1808]: time="2025-05-17T00:30:18.320224279Z" level=info msg="CreateContainer within sandbox \"038315c108ea92147b162c291283800e77183ca74c89db069aa563441dde6615\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:30:18.325538 containerd[1808]: time="2025-05-17T00:30:18.325520871Z" level=info msg="CreateContainer within sandbox \"038315c108ea92147b162c291283800e77183ca74c89db069aa563441dde6615\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5361e7ee8e418b4084764a104db335b0c53071006b0bf837372ca3849fbfe8e4\"" May 17 00:30:18.325793 containerd[1808]: time="2025-05-17T00:30:18.325751756Z" level=info msg="StartContainer for 
\"5361e7ee8e418b4084764a104db335b0c53071006b0bf837372ca3849fbfe8e4\"" May 17 00:30:18.347535 systemd[1]: Started cri-containerd-5361e7ee8e418b4084764a104db335b0c53071006b0bf837372ca3849fbfe8e4.scope - libcontainer container 5361e7ee8e418b4084764a104db335b0c53071006b0bf837372ca3849fbfe8e4. May 17 00:30:18.362029 containerd[1808]: time="2025-05-17T00:30:18.362003124Z" level=info msg="StartContainer for \"5361e7ee8e418b4084764a104db335b0c53071006b0bf837372ca3849fbfe8e4\" returns successfully" May 17 00:30:18.439069 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:30:18.439125 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 17 00:30:18.475703 containerd[1808]: time="2025-05-17T00:30:18.475672659Z" level=info msg="StopPodSandbox for \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\"" May 17 00:30:18.505283 kubelet[3069]: I0517 00:30:18.505199 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9njrx" podStartSLOduration=1.072497861 podStartE2EDuration="17.505172967s" podCreationTimestamp="2025-05-17 00:30:01 +0000 UTC" firstStartedPulling="2025-05-17 00:30:01.884538231 +0000 UTC m=+16.532471179" lastFinishedPulling="2025-05-17 00:30:18.317213339 +0000 UTC m=+32.965146285" observedRunningTime="2025-05-17 00:30:18.50493639 +0000 UTC m=+33.152869336" watchObservedRunningTime="2025-05-17 00:30:18.505172967 +0000 UTC m=+33.153105903" May 17 00:30:18.522243 containerd[1808]: 2025-05-17 00:30:18.505 [INFO][4666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:18.522243 containerd[1808]: 2025-05-17 00:30:18.505 [INFO][4666] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" iface="eth0" netns="/var/run/netns/cni-c63f772c-655b-8769-5b49-dc7a1f5686a9" May 17 00:30:18.522243 containerd[1808]: 2025-05-17 00:30:18.505 [INFO][4666] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" iface="eth0" netns="/var/run/netns/cni-c63f772c-655b-8769-5b49-dc7a1f5686a9" May 17 00:30:18.522243 containerd[1808]: 2025-05-17 00:30:18.505 [INFO][4666] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" iface="eth0" netns="/var/run/netns/cni-c63f772c-655b-8769-5b49-dc7a1f5686a9" May 17 00:30:18.522243 containerd[1808]: 2025-05-17 00:30:18.505 [INFO][4666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:18.522243 containerd[1808]: 2025-05-17 00:30:18.505 [INFO][4666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:18.522243 containerd[1808]: 2025-05-17 00:30:18.515 [INFO][4708] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" HandleID="k8s-pod-network.17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" Workload="ci--4081.3.3--n--65a4af4639-k8s-whisker--78ff969995--tfnzx-eth0" May 17 00:30:18.522243 containerd[1808]: 2025-05-17 00:30:18.515 [INFO][4708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:18.522243 containerd[1808]: 2025-05-17 00:30:18.515 [INFO][4708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:18.522243 containerd[1808]: 2025-05-17 00:30:18.518 [WARNING][4708] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" HandleID="k8s-pod-network.17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" Workload="ci--4081.3.3--n--65a4af4639-k8s-whisker--78ff969995--tfnzx-eth0" May 17 00:30:18.522243 containerd[1808]: 2025-05-17 00:30:18.518 [INFO][4708] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" HandleID="k8s-pod-network.17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" Workload="ci--4081.3.3--n--65a4af4639-k8s-whisker--78ff969995--tfnzx-eth0" May 17 00:30:18.522243 containerd[1808]: 2025-05-17 00:30:18.519 [INFO][4708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:18.522243 containerd[1808]: 2025-05-17 00:30:18.521 [INFO][4666] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:18.522530 containerd[1808]: time="2025-05-17T00:30:18.522336501Z" level=info msg="TearDown network for sandbox \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\" successfully" May 17 00:30:18.522530 containerd[1808]: time="2025-05-17T00:30:18.522356253Z" level=info msg="StopPodSandbox for \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\" returns successfully" May 17 00:30:18.650045 kubelet[3069]: I0517 00:30:18.649809 3069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4337522a-e2a1-45a2-9102-0c0650fe6aee-whisker-backend-key-pair\") pod \"4337522a-e2a1-45a2-9102-0c0650fe6aee\" (UID: \"4337522a-e2a1-45a2-9102-0c0650fe6aee\") " May 17 00:30:18.650045 kubelet[3069]: I0517 00:30:18.649947 3069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4337522a-e2a1-45a2-9102-0c0650fe6aee-whisker-ca-bundle\") pod \"4337522a-e2a1-45a2-9102-0c0650fe6aee\" (UID: \"4337522a-e2a1-45a2-9102-0c0650fe6aee\") " May 17 00:30:18.650566 kubelet[3069]: I0517 00:30:18.650053 3069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7p7t\" (UniqueName: \"kubernetes.io/projected/4337522a-e2a1-45a2-9102-0c0650fe6aee-kube-api-access-l7p7t\") pod \"4337522a-e2a1-45a2-9102-0c0650fe6aee\" (UID: \"4337522a-e2a1-45a2-9102-0c0650fe6aee\") " May 17 00:30:18.651151 kubelet[3069]: I0517 00:30:18.651067 3069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4337522a-e2a1-45a2-9102-0c0650fe6aee-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4337522a-e2a1-45a2-9102-0c0650fe6aee" (UID: "4337522a-e2a1-45a2-9102-0c0650fe6aee"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:30:18.655750 kubelet[3069]: I0517 00:30:18.655649 3069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4337522a-e2a1-45a2-9102-0c0650fe6aee-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4337522a-e2a1-45a2-9102-0c0650fe6aee" (UID: "4337522a-e2a1-45a2-9102-0c0650fe6aee"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:30:18.655939 kubelet[3069]: I0517 00:30:18.655844 3069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4337522a-e2a1-45a2-9102-0c0650fe6aee-kube-api-access-l7p7t" (OuterVolumeSpecName: "kube-api-access-l7p7t") pod "4337522a-e2a1-45a2-9102-0c0650fe6aee" (UID: "4337522a-e2a1-45a2-9102-0c0650fe6aee"). InnerVolumeSpecName "kube-api-access-l7p7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:30:18.751409 kubelet[3069]: I0517 00:30:18.751282 3069 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4337522a-e2a1-45a2-9102-0c0650fe6aee-whisker-backend-key-pair\") on node \"ci-4081.3.3-n-65a4af4639\" DevicePath \"\"" May 17 00:30:18.751409 kubelet[3069]: I0517 00:30:18.751414 3069 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4337522a-e2a1-45a2-9102-0c0650fe6aee-whisker-ca-bundle\") on node \"ci-4081.3.3-n-65a4af4639\" DevicePath \"\"" May 17 00:30:18.751756 kubelet[3069]: I0517 00:30:18.751457 3069 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7p7t\" (UniqueName: \"kubernetes.io/projected/4337522a-e2a1-45a2-9102-0c0650fe6aee-kube-api-access-l7p7t\") on node \"ci-4081.3.3-n-65a4af4639\" DevicePath \"\"" May 17 00:30:19.302988 systemd[1]: run-netns-cni\x2dc63f772c\x2d655b\x2d8769\x2d5b49\x2ddc7a1f5686a9.mount: Deactivated successfully. May 17 00:30:19.303039 systemd[1]: var-lib-kubelet-pods-4337522a\x2de2a1\x2d45a2\x2d9102\x2d0c0650fe6aee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl7p7t.mount: Deactivated successfully. May 17 00:30:19.303076 systemd[1]: var-lib-kubelet-pods-4337522a\x2de2a1\x2d45a2\x2d9102\x2d0c0650fe6aee-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:30:19.404274 systemd[1]: Removed slice kubepods-besteffort-pod4337522a_e2a1_45a2_9102_0c0650fe6aee.slice - libcontainer container kubepods-besteffort-pod4337522a_e2a1_45a2_9102_0c0650fe6aee.slice. May 17 00:30:19.534539 systemd[1]: Created slice kubepods-besteffort-podde7e5665_5768_4a36_925b_d749b053cd37.slice - libcontainer container kubepods-besteffort-podde7e5665_5768_4a36_925b_d749b053cd37.slice. 
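The failure loop between 00:30:11 and 00:30:12 above has a single root cause: every Calico CNI add and delete begins by reading /var/lib/calico/nodename, a file the calico/node container writes only once it is running with /var/lib/calico/ mounted. The pull of ghcr.io/flatcar/calico/node:v3.30.0 completes at 00:30:18, calico-node starts, and the very next StopPodSandbox for the whisker pod succeeds. Below is a minimal Go sketch of that gate, illustrative only and not Calico's actual source; the path and diagnostic text are taken verbatim from the log entries.

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is created by calico/node at startup; until it exists,
// every CNI ADD/DEL on the host fails with the error repeated above.
const nodenameFile = "/var/lib/calico/nodename"

func calicoNodeName() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// Same failure mode as the 00:30:11 entries above.
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", name)
}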
May 17 00:30:19.659454 kubelet[3069]: I0517 00:30:19.659122 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzcj6\" (UniqueName: \"kubernetes.io/projected/de7e5665-5768-4a36-925b-d749b053cd37-kube-api-access-hzcj6\") pod \"whisker-66d7ff6d95-qgk6n\" (UID: \"de7e5665-5768-4a36-925b-d749b053cd37\") " pod="calico-system/whisker-66d7ff6d95-qgk6n" May 17 00:30:19.659454 kubelet[3069]: I0517 00:30:19.659271 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de7e5665-5768-4a36-925b-d749b053cd37-whisker-backend-key-pair\") pod \"whisker-66d7ff6d95-qgk6n\" (UID: \"de7e5665-5768-4a36-925b-d749b053cd37\") " pod="calico-system/whisker-66d7ff6d95-qgk6n" May 17 00:30:19.659454 kubelet[3069]: I0517 00:30:19.659430 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7e5665-5768-4a36-925b-d749b053cd37-whisker-ca-bundle\") pod \"whisker-66d7ff6d95-qgk6n\" (UID: \"de7e5665-5768-4a36-925b-d749b053cd37\") " pod="calico-system/whisker-66d7ff6d95-qgk6n" May 17 00:30:19.837768 containerd[1808]: time="2025-05-17T00:30:19.837644756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66d7ff6d95-qgk6n,Uid:de7e5665-5768-4a36-925b-d749b053cd37,Namespace:calico-system,Attempt:0,}" May 17 00:30:19.902167 systemd-networkd[1605]: caliba16bcd8fa1: Link UP May 17 00:30:19.902440 systemd-networkd[1605]: caliba16bcd8fa1: Gained carrier May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.854 [INFO][4923] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.860 [INFO][4923] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-eth0 whisker-66d7ff6d95- calico-system de7e5665-5768-4a36-925b-d749b053cd37 860 0 2025-05-17 00:30:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66d7ff6d95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.3-n-65a4af4639 whisker-66d7ff6d95-qgk6n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliba16bcd8fa1 [] [] }} ContainerID="6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" Namespace="calico-system" Pod="whisker-66d7ff6d95-qgk6n" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-" May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.860 [INFO][4923] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" Namespace="calico-system" Pod="whisker-66d7ff6d95-qgk6n" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-eth0" May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.873 [INFO][4946] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" HandleID="k8s-pod-network.6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" Workload="ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-eth0" May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.873 [INFO][4946] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" HandleID="k8s-pod-network.6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" Workload="ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-65a4af4639", "pod":"whisker-66d7ff6d95-qgk6n", "timestamp":"2025-05-17 00:30:19.87341903 +0000 UTC"}, Hostname:"ci-4081.3.3-n-65a4af4639", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.873 [INFO][4946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.873 [INFO][4946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.873 [INFO][4946] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-65a4af4639' May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.878 [INFO][4946] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.881 [INFO][4946] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-65a4af4639" May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.885 [INFO][4946] ipam/ipam.go 511: Trying affinity for 192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.886 [INFO][4946] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.888 [INFO][4946] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.888 [INFO][4946] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.128/26 handle="k8s-pod-network.6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.889 [INFO][4946] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1 May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.892 [INFO][4946] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.128/26 handle="k8s-pod-network.6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.895 [INFO][4946] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.129/26] block=192.168.48.128/26 handle="k8s-pod-network.6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.895 [INFO][4946] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.129/26] handle="k8s-pod-network.6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.895 
[INFO][4946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:19.910529 containerd[1808]: 2025-05-17 00:30:19.895 [INFO][4946] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.129/26] IPv6=[] ContainerID="6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" HandleID="k8s-pod-network.6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" Workload="ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-eth0" May 17 00:30:19.911279 containerd[1808]: 2025-05-17 00:30:19.896 [INFO][4923] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" Namespace="calico-system" Pod="whisker-66d7ff6d95-qgk6n" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-eth0", GenerateName:"whisker-66d7ff6d95-", Namespace:"calico-system", SelfLink:"", UID:"de7e5665-5768-4a36-925b-d749b053cd37", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66d7ff6d95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"", Pod:"whisker-66d7ff6d95-qgk6n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.48.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliba16bcd8fa1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:19.911279 containerd[1808]: 2025-05-17 00:30:19.896 [INFO][4923] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.129/32] ContainerID="6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" Namespace="calico-system" Pod="whisker-66d7ff6d95-qgk6n" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-eth0" May 17 00:30:19.911279 containerd[1808]: 2025-05-17 00:30:19.896 [INFO][4923] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba16bcd8fa1 ContainerID="6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" Namespace="calico-system" Pod="whisker-66d7ff6d95-qgk6n" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-eth0" May 17 00:30:19.911279 containerd[1808]: 2025-05-17 00:30:19.902 [INFO][4923] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" Namespace="calico-system" Pod="whisker-66d7ff6d95-qgk6n" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-eth0" May 17 00:30:19.911279 containerd[1808]: 2025-05-17 00:30:19.902 [INFO][4923] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" Namespace="calico-system" Pod="whisker-66d7ff6d95-qgk6n" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-eth0", GenerateName:"whisker-66d7ff6d95-", Namespace:"calico-system", SelfLink:"", UID:"de7e5665-5768-4a36-925b-d749b053cd37", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66d7ff6d95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1", Pod:"whisker-66d7ff6d95-qgk6n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.48.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliba16bcd8fa1", MAC:"1a:f5:f9:4b:c0:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:19.911279 containerd[1808]: 2025-05-17 00:30:19.909 [INFO][4923] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1" Namespace="calico-system" Pod="whisker-66d7ff6d95-qgk6n" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-whisker--66d7ff6d95--qgk6n-eth0" May 17 00:30:19.920058 containerd[1808]: time="2025-05-17T00:30:19.919988902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:19.920058 containerd[1808]: time="2025-05-17T00:30:19.920022679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:19.920058 containerd[1808]: time="2025-05-17T00:30:19.920030249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:19.920166 containerd[1808]: time="2025-05-17T00:30:19.920069118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:19.939643 systemd[1]: Started cri-containerd-6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1.scope - libcontainer container 6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1. 
May 17 00:30:19.980691 containerd[1808]: time="2025-05-17T00:30:19.980654898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66d7ff6d95-qgk6n,Uid:de7e5665-5768-4a36-925b-d749b053cd37,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f161cc2b34debc8757958a2fb67cbc9a17504e8baa1f12ed880a33ae6433ee1\"" May 17 00:30:19.981733 containerd[1808]: time="2025-05-17T00:30:19.981709962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:30:20.311908 containerd[1808]: time="2025-05-17T00:30:20.311849039Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:30:20.312351 containerd[1808]: time="2025-05-17T00:30:20.312293417Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:30:20.312506 containerd[1808]: time="2025-05-17T00:30:20.312405538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:30:20.312579 kubelet[3069]: E0517 00:30:20.312533 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:30:20.312579 kubelet[3069]: E0517 00:30:20.312562 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:30:20.312656 kubelet[3069]: E0517 00:30:20.312635 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e162bdd4080846acbc669a202524e139,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hzcj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d7ff6d95-qgk6n_calico-system(de7e5665-5768-4a36-925b-d749b053cd37): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:30:20.314295 containerd[1808]: time="2025-05-17T00:30:20.314266535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:30:20.617099 containerd[1808]: time="2025-05-17T00:30:20.617019693Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:30:20.617445 containerd[1808]: time="2025-05-17T00:30:20.617421340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:30:20.617498 containerd[1808]: time="2025-05-17T00:30:20.617436567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:30:20.617644 kubelet[3069]: E0517 00:30:20.617622 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:30:20.617712 kubelet[3069]: E0517 00:30:20.617659 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:30:20.617797 kubelet[3069]: E0517 00:30:20.617771 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzcj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d7ff6d95-qgk6n_calico-system(de7e5665-5768-4a36-925b-d749b053cd37): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:30:20.619071 kubelet[3069]: E0517 00:30:20.619038 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:30:21.400108 kubelet[3069]: I0517 00:30:21.400000 3069 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4337522a-e2a1-45a2-9102-0c0650fe6aee" path="/var/lib/kubelet/pods/4337522a-e2a1-45a2-9102-0c0650fe6aee/volumes" May 17 00:30:21.508906 kubelet[3069]: E0517 00:30:21.508829 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:30:21.514692 systemd-networkd[1605]: caliba16bcd8fa1: Gained IPv6LL May 17 00:30:23.394050 containerd[1808]: time="2025-05-17T00:30:23.393962227Z" level=info msg="StopPodSandbox for \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\"" May 17 00:30:23.394888 containerd[1808]: time="2025-05-17T00:30:23.394244904Z" level=info msg="StopPodSandbox for \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\"" May 17 00:30:23.436985 containerd[1808]: 2025-05-17 00:30:23.420 [INFO][5167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:23.436985 containerd[1808]: 2025-05-17 00:30:23.420 [INFO][5167] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" iface="eth0" netns="/var/run/netns/cni-5ac0ee3f-29f4-3cc5-cc29-236ae16b72a2" May 17 00:30:23.436985 containerd[1808]: 2025-05-17 00:30:23.420 [INFO][5167] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" iface="eth0" netns="/var/run/netns/cni-5ac0ee3f-29f4-3cc5-cc29-236ae16b72a2" May 17 00:30:23.436985 containerd[1808]: 2025-05-17 00:30:23.420 [INFO][5167] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" iface="eth0" netns="/var/run/netns/cni-5ac0ee3f-29f4-3cc5-cc29-236ae16b72a2" May 17 00:30:23.436985 containerd[1808]: 2025-05-17 00:30:23.421 [INFO][5167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:23.436985 containerd[1808]: 2025-05-17 00:30:23.421 [INFO][5167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:23.436985 containerd[1808]: 2025-05-17 00:30:23.431 [INFO][5199] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" HandleID="k8s-pod-network.746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" Workload="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:23.436985 containerd[1808]: 2025-05-17 00:30:23.431 [INFO][5199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:23.436985 containerd[1808]: 2025-05-17 00:30:23.431 [INFO][5199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:23.436985 containerd[1808]: 2025-05-17 00:30:23.434 [WARNING][5199] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" HandleID="k8s-pod-network.746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" Workload="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:23.436985 containerd[1808]: 2025-05-17 00:30:23.434 [INFO][5199] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" HandleID="k8s-pod-network.746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" Workload="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:23.436985 containerd[1808]: 2025-05-17 00:30:23.435 [INFO][5199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:23.436985 containerd[1808]: 2025-05-17 00:30:23.436 [INFO][5167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:23.437482 containerd[1808]: time="2025-05-17T00:30:23.437061258Z" level=info msg="TearDown network for sandbox \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\" successfully" May 17 00:30:23.437482 containerd[1808]: time="2025-05-17T00:30:23.437079989Z" level=info msg="StopPodSandbox for \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\" returns successfully" May 17 00:30:23.437482 containerd[1808]: time="2025-05-17T00:30:23.437442621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdwc8,Uid:8e114f42-fc56-4b66-8d81-a37f65ab357c,Namespace:calico-system,Attempt:1,}" May 17 00:30:23.438682 systemd[1]: run-netns-cni\x2d5ac0ee3f\x2d29f4\x2d3cc5\x2dcc29\x2d236ae16b72a2.mount: Deactivated successfully. May 17 00:30:23.441737 containerd[1808]: 2025-05-17 00:30:23.420 [INFO][5166] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:23.441737 containerd[1808]: 2025-05-17 00:30:23.420 [INFO][5166] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" iface="eth0" netns="/var/run/netns/cni-9253a2e4-da9b-7f6f-e7f7-063931550e87" May 17 00:30:23.441737 containerd[1808]: 2025-05-17 00:30:23.420 [INFO][5166] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" iface="eth0" netns="/var/run/netns/cni-9253a2e4-da9b-7f6f-e7f7-063931550e87" May 17 00:30:23.441737 containerd[1808]: 2025-05-17 00:30:23.420 [INFO][5166] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" iface="eth0" netns="/var/run/netns/cni-9253a2e4-da9b-7f6f-e7f7-063931550e87" May 17 00:30:23.441737 containerd[1808]: 2025-05-17 00:30:23.420 [INFO][5166] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:23.441737 containerd[1808]: 2025-05-17 00:30:23.420 [INFO][5166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:23.441737 containerd[1808]: 2025-05-17 00:30:23.431 [INFO][5197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" HandleID="k8s-pod-network.3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:23.441737 containerd[1808]: 2025-05-17 00:30:23.431 [INFO][5197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:23.441737 containerd[1808]: 2025-05-17 00:30:23.435 [INFO][5197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:23.441737 containerd[1808]: 2025-05-17 00:30:23.438 [WARNING][5197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" HandleID="k8s-pod-network.3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:23.441737 containerd[1808]: 2025-05-17 00:30:23.439 [INFO][5197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" HandleID="k8s-pod-network.3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:23.441737 containerd[1808]: 2025-05-17 00:30:23.440 [INFO][5197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:23.441737 containerd[1808]: 2025-05-17 00:30:23.440 [INFO][5166] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:23.442009 containerd[1808]: time="2025-05-17T00:30:23.441765987Z" level=info msg="TearDown network for sandbox \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\" successfully" May 17 00:30:23.442009 containerd[1808]: time="2025-05-17T00:30:23.441779260Z" level=info msg="StopPodSandbox for \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\" returns successfully" May 17 00:30:23.442140 containerd[1808]: time="2025-05-17T00:30:23.442125791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kbg69,Uid:83bc9904-f910-4661-b26c-3bab2e3ff098,Namespace:kube-system,Attempt:1,}" May 17 00:30:23.445118 systemd[1]: run-netns-cni\x2d9253a2e4\x2dda9b\x2d7f6f\x2de7f7\x2d063931550e87.mount: Deactivated successfully. May 17 00:30:23.507414 systemd-networkd[1605]: califef44539932: Link UP May 17 00:30:23.507574 systemd-networkd[1605]: califef44539932: Gained carrier May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.452 [INFO][5231] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.458 [INFO][5231] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0 csi-node-driver- calico-system 8e114f42-fc56-4b66-8d81-a37f65ab357c 891 0 2025-05-17 00:30:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-n-65a4af4639 csi-node-driver-zdwc8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califef44539932 [] [] }} ContainerID="0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" Namespace="calico-system" Pod="csi-node-driver-zdwc8" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-" May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.458 [INFO][5231] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" Namespace="calico-system" Pod="csi-node-driver-zdwc8" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.470 [INFO][5275] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" HandleID="k8s-pod-network.0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" Workload="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.470 [INFO][5275] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" HandleID="k8s-pod-network.0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" Workload="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000255880), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-65a4af4639", "pod":"csi-node-driver-zdwc8", "timestamp":"2025-05-17 00:30:23.47064155 +0000 UTC"}, 
Hostname:"ci-4081.3.3-n-65a4af4639", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.470 [INFO][5275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.470 [INFO][5275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.470 [INFO][5275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-65a4af4639' May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.475 [INFO][5275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.477 [INFO][5275] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.479 [INFO][5275] ipam/ipam.go 511: Trying affinity for 192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.480 [INFO][5275] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.481 [INFO][5275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.481 [INFO][5275] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.128/26 handle="k8s-pod-network.0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.482 [INFO][5275] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.502 [INFO][5275] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.128/26 handle="k8s-pod-network.0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.505 [INFO][5275] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.130/26] block=192.168.48.128/26 handle="k8s-pod-network.0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.505 [INFO][5275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.130/26] handle="k8s-pod-network.0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.505 [INFO][5275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:30:23.513636 containerd[1808]: 2025-05-17 00:30:23.505 [INFO][5275] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.130/26] IPv6=[] ContainerID="0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" HandleID="k8s-pod-network.0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" Workload="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:23.514153 containerd[1808]: 2025-05-17 00:30:23.506 [INFO][5231] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" Namespace="calico-system" Pod="csi-node-driver-zdwc8" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e114f42-fc56-4b66-8d81-a37f65ab357c", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"", Pod:"csi-node-driver-zdwc8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.48.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califef44539932", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:23.514153 containerd[1808]: 2025-05-17 00:30:23.506 [INFO][5231] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.130/32] ContainerID="0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" Namespace="calico-system" Pod="csi-node-driver-zdwc8" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:23.514153 containerd[1808]: 2025-05-17 00:30:23.506 [INFO][5231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califef44539932 ContainerID="0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" Namespace="calico-system" Pod="csi-node-driver-zdwc8" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:23.514153 containerd[1808]: 2025-05-17 00:30:23.507 [INFO][5231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" Namespace="calico-system" Pod="csi-node-driver-zdwc8" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:23.514153 containerd[1808]: 2025-05-17 00:30:23.507 [INFO][5231] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" Namespace="calico-system" Pod="csi-node-driver-zdwc8" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e114f42-fc56-4b66-8d81-a37f65ab357c", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f", Pod:"csi-node-driver-zdwc8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.48.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califef44539932", MAC:"d2:6f:e4:9a:d5:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:23.514153 containerd[1808]: 2025-05-17 00:30:23.512 [INFO][5231] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f" Namespace="calico-system" Pod="csi-node-driver-zdwc8" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:23.521549 containerd[1808]: time="2025-05-17T00:30:23.521475389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:23.521704 containerd[1808]: time="2025-05-17T00:30:23.521687526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:23.521704 containerd[1808]: time="2025-05-17T00:30:23.521698579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:23.521757 containerd[1808]: time="2025-05-17T00:30:23.521743074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:23.534528 systemd[1]: Started cri-containerd-0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f.scope - libcontainer container 0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f. 
May 17 00:30:23.544522 containerd[1808]: time="2025-05-17T00:30:23.544500550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdwc8,Uid:8e114f42-fc56-4b66-8d81-a37f65ab357c,Namespace:calico-system,Attempt:1,} returns sandbox id \"0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f\"" May 17 00:30:23.545173 containerd[1808]: time="2025-05-17T00:30:23.545163157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:30:23.624019 systemd-networkd[1605]: cali23226e6534b: Link UP May 17 00:30:23.624170 systemd-networkd[1605]: cali23226e6534b: Gained carrier May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.456 [INFO][5245] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.461 [INFO][5245] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0 coredns-7c65d6cfc9- kube-system 83bc9904-f910-4661-b26c-3bab2e3ff098 890 0 2025-05-17 00:29:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-65a4af4639 coredns-7c65d6cfc9-kbg69 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali23226e6534b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbg69" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-" May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.461 [INFO][5245] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbg69" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.473 [INFO][5284] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" HandleID="k8s-pod-network.51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.473 [INFO][5284] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" HandleID="k8s-pod-network.51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00044e270), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-65a4af4639", "pod":"coredns-7c65d6cfc9-kbg69", "timestamp":"2025-05-17 00:30:23.473101707 +0000 UTC"}, Hostname:"ci-4081.3.3-n-65a4af4639", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.473 [INFO][5284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.505 [INFO][5284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.505 [INFO][5284] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-65a4af4639' May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.579 [INFO][5284] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.588 [INFO][5284] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.596 [INFO][5284] ipam/ipam.go 511: Trying affinity for 192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.600 [INFO][5284] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.605 [INFO][5284] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.605 [INFO][5284] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.128/26 handle="k8s-pod-network.51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.608 [INFO][5284] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53 May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.615 [INFO][5284] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.128/26 handle="k8s-pod-network.51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.621 [INFO][5284] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.131/26] block=192.168.48.128/26 handle="k8s-pod-network.51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.621 [INFO][5284] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.131/26] handle="k8s-pod-network.51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.621 [INFO][5284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:30:23.631194 containerd[1808]: 2025-05-17 00:30:23.621 [INFO][5284] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.131/26] IPv6=[] ContainerID="51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" HandleID="k8s-pod-network.51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:23.631758 containerd[1808]: 2025-05-17 00:30:23.622 [INFO][5245] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbg69" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"83bc9904-f910-4661-b26c-3bab2e3ff098", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"", Pod:"coredns-7c65d6cfc9-kbg69", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23226e6534b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:23.631758 containerd[1808]: 2025-05-17 00:30:23.623 [INFO][5245] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.131/32] ContainerID="51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbg69" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:23.631758 containerd[1808]: 2025-05-17 00:30:23.623 [INFO][5245] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23226e6534b ContainerID="51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbg69" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:23.631758 containerd[1808]: 2025-05-17 00:30:23.624 [INFO][5245] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-kbg69" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:23.631758 containerd[1808]: 2025-05-17 00:30:23.624 [INFO][5245] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbg69" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"83bc9904-f910-4661-b26c-3bab2e3ff098", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53", Pod:"coredns-7c65d6cfc9-kbg69", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23226e6534b", MAC:"b2:ee:83:9f:ef:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:23.631758 containerd[1808]: 2025-05-17 00:30:23.630 [INFO][5245] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbg69" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:23.640398 containerd[1808]: time="2025-05-17T00:30:23.640350317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:23.640398 containerd[1808]: time="2025-05-17T00:30:23.640383396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:23.640398 containerd[1808]: time="2025-05-17T00:30:23.640390729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:23.640525 containerd[1808]: time="2025-05-17T00:30:23.640440669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:23.660659 systemd[1]: Started cri-containerd-51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53.scope - libcontainer container 51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53. May 17 00:30:23.694807 containerd[1808]: time="2025-05-17T00:30:23.694746253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kbg69,Uid:83bc9904-f910-4661-b26c-3bab2e3ff098,Namespace:kube-system,Attempt:1,} returns sandbox id \"51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53\"" May 17 00:30:23.696486 containerd[1808]: time="2025-05-17T00:30:23.696439627Z" level=info msg="CreateContainer within sandbox \"51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:30:23.701360 containerd[1808]: time="2025-05-17T00:30:23.701343523Z" level=info msg="CreateContainer within sandbox \"51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"727113855790c88a5940c563a9d852d60e9e92351a1187c2921c4f0c0ff13582\"" May 17 00:30:23.701589 containerd[1808]: time="2025-05-17T00:30:23.701537248Z" level=info msg="StartContainer for \"727113855790c88a5940c563a9d852d60e9e92351a1187c2921c4f0c0ff13582\"" May 17 00:30:23.727513 systemd[1]: Started cri-containerd-727113855790c88a5940c563a9d852d60e9e92351a1187c2921c4f0c0ff13582.scope - libcontainer container 727113855790c88a5940c563a9d852d60e9e92351a1187c2921c4f0c0ff13582. May 17 00:30:23.741814 containerd[1808]: time="2025-05-17T00:30:23.741792344Z" level=info msg="StartContainer for \"727113855790c88a5940c563a9d852d60e9e92351a1187c2921c4f0c0ff13582\" returns successfully" May 17 00:30:24.268580 kubelet[3069]: I0517 00:30:24.268457 3069 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:30:24.392851 containerd[1808]: time="2025-05-17T00:30:24.392826169Z" level=info msg="StopPodSandbox for \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\"" May 17 00:30:24.456371 containerd[1808]: 2025-05-17 00:30:24.434 [INFO][5521] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:24.456371 containerd[1808]: 2025-05-17 00:30:24.434 [INFO][5521] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" iface="eth0" netns="/var/run/netns/cni-d6f1ce7d-f4bb-22c1-318e-c468ee081cf8" May 17 00:30:24.456371 containerd[1808]: 2025-05-17 00:30:24.435 [INFO][5521] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" iface="eth0" netns="/var/run/netns/cni-d6f1ce7d-f4bb-22c1-318e-c468ee081cf8" May 17 00:30:24.456371 containerd[1808]: 2025-05-17 00:30:24.435 [INFO][5521] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" iface="eth0" netns="/var/run/netns/cni-d6f1ce7d-f4bb-22c1-318e-c468ee081cf8" May 17 00:30:24.456371 containerd[1808]: 2025-05-17 00:30:24.436 [INFO][5521] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:24.456371 containerd[1808]: 2025-05-17 00:30:24.436 [INFO][5521] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:24.456371 containerd[1808]: 2025-05-17 00:30:24.450 [INFO][5535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" HandleID="k8s-pod-network.90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:24.456371 containerd[1808]: 2025-05-17 00:30:24.450 [INFO][5535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:24.456371 containerd[1808]: 2025-05-17 00:30:24.450 [INFO][5535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:24.456371 containerd[1808]: 2025-05-17 00:30:24.454 [WARNING][5535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" HandleID="k8s-pod-network.90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:24.456371 containerd[1808]: 2025-05-17 00:30:24.454 [INFO][5535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" HandleID="k8s-pod-network.90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:24.456371 containerd[1808]: 2025-05-17 00:30:24.454 [INFO][5535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:24.456371 containerd[1808]: 2025-05-17 00:30:24.455 [INFO][5521] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:24.456776 containerd[1808]: time="2025-05-17T00:30:24.456456625Z" level=info msg="TearDown network for sandbox \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\" successfully" May 17 00:30:24.456776 containerd[1808]: time="2025-05-17T00:30:24.456474346Z" level=info msg="StopPodSandbox for \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\" returns successfully" May 17 00:30:24.456911 containerd[1808]: time="2025-05-17T00:30:24.456866887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-688cf69547-z8c5c,Uid:47a8b75a-0ae5-4d27-9954-209696bc0aa7,Namespace:calico-apiserver,Attempt:1,}" May 17 00:30:24.458455 systemd[1]: run-netns-cni\x2dd6f1ce7d\x2df4bb\x2d22c1\x2d318e\x2dc468ee081cf8.mount: Deactivated successfully. 
May 17 00:30:24.524939 systemd-networkd[1605]: calib62504a881c: Link UP May 17 00:30:24.525822 systemd-networkd[1605]: calib62504a881c: Gained carrier May 17 00:30:24.543849 kubelet[3069]: I0517 00:30:24.543707 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kbg69" podStartSLOduration=33.543653455 podStartE2EDuration="33.543653455s" podCreationTimestamp="2025-05-17 00:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:30:24.542933085 +0000 UTC m=+39.190866091" watchObservedRunningTime="2025-05-17 00:30:24.543653455 +0000 UTC m=+39.191586450" May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.471 [INFO][5549] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.477 [INFO][5549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0 calico-apiserver-688cf69547- calico-apiserver 47a8b75a-0ae5-4d27-9954-209696bc0aa7 912 0 2025-05-17 00:29:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:688cf69547 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-65a4af4639 calico-apiserver-688cf69547-z8c5c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib62504a881c [] [] }} ContainerID="aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-z8c5c" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-" May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.477 [INFO][5549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-z8c5c" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.489 [INFO][5569] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" HandleID="k8s-pod-network.aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.489 [INFO][5569] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" HandleID="k8s-pod-network.aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-65a4af4639", "pod":"calico-apiserver-688cf69547-z8c5c", "timestamp":"2025-05-17 00:30:24.489269753 +0000 UTC"}, Hostname:"ci-4081.3.3-n-65a4af4639", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.489 [INFO][5569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.489 [INFO][5569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.489 [INFO][5569] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-65a4af4639' May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.493 [INFO][5569] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.495 [INFO][5569] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-65a4af4639" May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.498 [INFO][5569] ipam/ipam.go 511: Trying affinity for 192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.499 [INFO][5569] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.500 [INFO][5569] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.500 [INFO][5569] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.128/26 handle="k8s-pod-network.aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.501 [INFO][5569] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.505 [INFO][5569] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.128/26 handle="k8s-pod-network.aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.517 [INFO][5569] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.132/26] block=192.168.48.128/26 handle="k8s-pod-network.aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.517 [INFO][5569] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.132/26] handle="k8s-pod-network.aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.517 [INFO][5569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:30:24.549302 containerd[1808]: 2025-05-17 00:30:24.517 [INFO][5569] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.132/26] IPv6=[] ContainerID="aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" HandleID="k8s-pod-network.aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:24.551206 containerd[1808]: 2025-05-17 00:30:24.521 [INFO][5549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-z8c5c" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0", GenerateName:"calico-apiserver-688cf69547-", Namespace:"calico-apiserver", SelfLink:"", UID:"47a8b75a-0ae5-4d27-9954-209696bc0aa7", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"688cf69547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"", Pod:"calico-apiserver-688cf69547-z8c5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib62504a881c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:24.551206 containerd[1808]: 2025-05-17 00:30:24.521 [INFO][5549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.132/32] ContainerID="aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-z8c5c" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:24.551206 containerd[1808]: 2025-05-17 00:30:24.521 [INFO][5549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib62504a881c ContainerID="aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-z8c5c" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:24.551206 containerd[1808]: 2025-05-17 00:30:24.525 [INFO][5549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-z8c5c" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:24.551206 containerd[1808]: 2025-05-17 00:30:24.526 
[INFO][5549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-z8c5c" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0", GenerateName:"calico-apiserver-688cf69547-", Namespace:"calico-apiserver", SelfLink:"", UID:"47a8b75a-0ae5-4d27-9954-209696bc0aa7", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"688cf69547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d", Pod:"calico-apiserver-688cf69547-z8c5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib62504a881c", MAC:"2a:e3:8a:69:93:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:24.551206 containerd[1808]: 2025-05-17 00:30:24.546 [INFO][5549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-z8c5c" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:24.561533 containerd[1808]: time="2025-05-17T00:30:24.561224277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:24.561533 containerd[1808]: time="2025-05-17T00:30:24.561497769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:24.561533 containerd[1808]: time="2025-05-17T00:30:24.561507576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:24.561645 containerd[1808]: time="2025-05-17T00:30:24.561592495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:24.577837 systemd[1]: Started cri-containerd-aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d.scope - libcontainer container aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d. 
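The endpoint written above carries MAC 2a:e3:8a:69:93:cd for calib62504a881c. The first octet (0x2a) has the locally-administered bit set and the multicast bit clear, consistent with a randomly generated unicast veth address rather than a vendor-assigned one; a quick check using only Go's standard library:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// MAC recorded for the calib62504a881c endpoint above.
	hw, err := net.ParseMAC("2a:e3:8a:69:93:cd")
	if err != nil {
		panic(err)
	}
	// First octet 0x2a: multicast bit (0x01) clear -> unicast;
	// locally-administered bit (0x02) set -> not a vendor OUI.
	fmt.Printf("unicast=%v locally-administered=%v\n",
		hw[0]&0x01 == 0, hw[0]&0x02 != 0)
}
```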
May 17 00:30:24.623239 containerd[1808]: time="2025-05-17T00:30:24.623214762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-688cf69547-z8c5c,Uid:47a8b75a-0ae5-4d27-9954-209696bc0aa7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d\"" May 17 00:30:24.820453 kernel: bpftool[5730]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:30:24.974756 systemd-networkd[1605]: vxlan.calico: Link UP May 17 00:30:24.974758 systemd-networkd[1605]: vxlan.calico: Gained carrier May 17 00:30:25.255775 containerd[1808]: time="2025-05-17T00:30:25.255685107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:25.255922 containerd[1808]: time="2025-05-17T00:30:25.255868634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:30:25.256181 containerd[1808]: time="2025-05-17T00:30:25.256147477Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:25.257430 containerd[1808]: time="2025-05-17T00:30:25.257402379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:25.257773 containerd[1808]: time="2025-05-17T00:30:25.257731401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.712553941s" May 17 00:30:25.257773 containerd[1808]: time="2025-05-17T00:30:25.257746414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:30:25.258274 containerd[1808]: time="2025-05-17T00:30:25.258232375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:30:25.258755 containerd[1808]: time="2025-05-17T00:30:25.258742091Z" level=info msg="CreateContainer within sandbox \"0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:30:25.263588 containerd[1808]: time="2025-05-17T00:30:25.263573187Z" level=info msg="CreateContainer within sandbox \"0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ca7a3c53a413c6cb3e6cd4a86f3fc2ad16dc54d7c7d49ee6668da29444d6399e\"" May 17 00:30:25.263846 containerd[1808]: time="2025-05-17T00:30:25.263800798Z" level=info msg="StartContainer for \"ca7a3c53a413c6cb3e6cd4a86f3fc2ad16dc54d7c7d49ee6668da29444d6399e\"" May 17 00:30:25.289712 systemd[1]: Started cri-containerd-ca7a3c53a413c6cb3e6cd4a86f3fc2ad16dc54d7c7d49ee6668da29444d6399e.scope - libcontainer container ca7a3c53a413c6cb3e6cd4a86f3fc2ad16dc54d7c7d49ee6668da29444d6399e. 
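The PullImage line above reports 8758390 bytes read for ghcr.io/flatcar/calico/csi:v3.30.0 in 1.712553941s, roughly 4.9 MiB/s effective; a small sketch reproducing that arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken from the PullImage / "stop pulling image" lines above.
	d, err := time.ParseDuration("1.712553941s")
	if err != nil {
		panic(err)
	}
	const bytesRead = 8758390 // bytes fetched for csi:v3.30.0
	rate := float64(bytesRead) / d.Seconds()
	fmt.Printf("effective pull rate: %.2f MiB/s\n", rate/(1<<20)) // ~4.88 MiB/s
}
```

Note the two sizes differ: 8758390 is what was fetched over the network, while the reported image size of 10251093 is the unpacked content.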
May 17 00:30:25.291423 systemd-networkd[1605]: cali23226e6534b: Gained IPv6LL May 17 00:30:25.304511 containerd[1808]: time="2025-05-17T00:30:25.304481812Z" level=info msg="StartContainer for \"ca7a3c53a413c6cb3e6cd4a86f3fc2ad16dc54d7c7d49ee6668da29444d6399e\" returns successfully" May 17 00:30:25.354580 systemd-networkd[1605]: califef44539932: Gained IPv6LL May 17 00:30:25.393597 containerd[1808]: time="2025-05-17T00:30:25.393534025Z" level=info msg="StopPodSandbox for \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\"" May 17 00:30:25.394805 containerd[1808]: time="2025-05-17T00:30:25.394734526Z" level=info msg="StopPodSandbox for \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\"" May 17 00:30:25.394988 containerd[1808]: time="2025-05-17T00:30:25.394741669Z" level=info msg="StopPodSandbox for \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\"" May 17 00:30:25.395143 containerd[1808]: time="2025-05-17T00:30:25.394997477Z" level=info msg="StopPodSandbox for \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\"" May 17 00:30:25.473027 containerd[1808]: 2025-05-17 00:30:25.453 [INFO][5917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:25.473027 containerd[1808]: 2025-05-17 00:30:25.453 [INFO][5917] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" iface="eth0" netns="/var/run/netns/cni-b1b5bcf5-5d26-477c-1b9b-4df59dbe0b41" May 17 00:30:25.473027 containerd[1808]: 2025-05-17 00:30:25.454 [INFO][5917] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" iface="eth0" netns="/var/run/netns/cni-b1b5bcf5-5d26-477c-1b9b-4df59dbe0b41" May 17 00:30:25.473027 containerd[1808]: 2025-05-17 00:30:25.454 [INFO][5917] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" iface="eth0" netns="/var/run/netns/cni-b1b5bcf5-5d26-477c-1b9b-4df59dbe0b41" May 17 00:30:25.473027 containerd[1808]: 2025-05-17 00:30:25.454 [INFO][5917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:25.473027 containerd[1808]: 2025-05-17 00:30:25.454 [INFO][5917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:25.473027 containerd[1808]: 2025-05-17 00:30:25.466 [INFO][5968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" HandleID="k8s-pod-network.216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" Workload="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:25.473027 containerd[1808]: 2025-05-17 00:30:25.466 [INFO][5968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:25.473027 containerd[1808]: 2025-05-17 00:30:25.466 [INFO][5968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:25.473027 containerd[1808]: 2025-05-17 00:30:25.471 [WARNING][5968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" HandleID="k8s-pod-network.216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" Workload="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:25.473027 containerd[1808]: 2025-05-17 00:30:25.471 [INFO][5968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" HandleID="k8s-pod-network.216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" Workload="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:25.473027 containerd[1808]: 2025-05-17 00:30:25.471 [INFO][5968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:25.473027 containerd[1808]: 2025-05-17 00:30:25.472 [INFO][5917] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:25.473663 containerd[1808]: time="2025-05-17T00:30:25.473166167Z" level=info msg="TearDown network for sandbox \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\" successfully" May 17 00:30:25.473663 containerd[1808]: time="2025-05-17T00:30:25.473187431Z" level=info msg="StopPodSandbox for \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\" returns successfully" May 17 00:30:25.473663 containerd[1808]: time="2025-05-17T00:30:25.473540720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-st7xr,Uid:79106d86-9187-44ab-a1d7-9ef14c711cf6,Namespace:calico-system,Attempt:1,}" May 17 00:30:25.474900 systemd[1]: run-netns-cni\x2db1b5bcf5\x2d5d26\x2d477c\x2d1b9b\x2d4df59dbe0b41.mount: Deactivated successfully. May 17 00:30:25.478255 containerd[1808]: 2025-05-17 00:30:25.455 [INFO][5916] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:25.478255 containerd[1808]: 2025-05-17 00:30:25.455 [INFO][5916] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" iface="eth0" netns="/var/run/netns/cni-7a0f5b50-dc49-ce2b-552e-1acd8dfe4f0f" May 17 00:30:25.478255 containerd[1808]: 2025-05-17 00:30:25.455 [INFO][5916] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" iface="eth0" netns="/var/run/netns/cni-7a0f5b50-dc49-ce2b-552e-1acd8dfe4f0f" May 17 00:30:25.478255 containerd[1808]: 2025-05-17 00:30:25.455 [INFO][5916] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" iface="eth0" netns="/var/run/netns/cni-7a0f5b50-dc49-ce2b-552e-1acd8dfe4f0f" May 17 00:30:25.478255 containerd[1808]: 2025-05-17 00:30:25.455 [INFO][5916] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:25.478255 containerd[1808]: 2025-05-17 00:30:25.455 [INFO][5916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:25.478255 containerd[1808]: 2025-05-17 00:30:25.466 [INFO][5976] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" HandleID="k8s-pod-network.561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:25.478255 containerd[1808]: 2025-05-17 00:30:25.466 [INFO][5976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:25.478255 containerd[1808]: 2025-05-17 00:30:25.471 [INFO][5976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:25.478255 containerd[1808]: 2025-05-17 00:30:25.475 [WARNING][5976] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" HandleID="k8s-pod-network.561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:25.478255 containerd[1808]: 2025-05-17 00:30:25.475 [INFO][5976] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" HandleID="k8s-pod-network.561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:25.478255 containerd[1808]: 2025-05-17 00:30:25.476 [INFO][5976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:25.478255 containerd[1808]: 2025-05-17 00:30:25.476 [INFO][5916] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:25.478677 containerd[1808]: time="2025-05-17T00:30:25.478311129Z" level=info msg="TearDown network for sandbox \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\" successfully" May 17 00:30:25.478677 containerd[1808]: time="2025-05-17T00:30:25.478333720Z" level=info msg="StopPodSandbox for \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\" returns successfully" May 17 00:30:25.478719 containerd[1808]: time="2025-05-17T00:30:25.478692294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2nkhg,Uid:441d803b-df59-4a3e-b55c-43834c087e2b,Namespace:kube-system,Attempt:1,}" May 17 00:30:25.482227 systemd[1]: run-netns-cni\x2d7a0f5b50\x2ddc49\x2dce2b\x2d552e\x2d1acd8dfe4f0f.mount: Deactivated successfully. May 17 00:30:25.483110 containerd[1808]: 2025-05-17 00:30:25.454 [INFO][5919] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:25.483110 containerd[1808]: 2025-05-17 00:30:25.454 [INFO][5919] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" iface="eth0" netns="/var/run/netns/cni-5145f608-57bc-67c3-3b96-684280059335" May 17 00:30:25.483110 containerd[1808]: 2025-05-17 00:30:25.454 [INFO][5919] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" iface="eth0" netns="/var/run/netns/cni-5145f608-57bc-67c3-3b96-684280059335" May 17 00:30:25.483110 containerd[1808]: 2025-05-17 00:30:25.454 [INFO][5919] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" iface="eth0" netns="/var/run/netns/cni-5145f608-57bc-67c3-3b96-684280059335" May 17 00:30:25.483110 containerd[1808]: 2025-05-17 00:30:25.454 [INFO][5919] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:25.483110 containerd[1808]: 2025-05-17 00:30:25.454 [INFO][5919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:25.483110 containerd[1808]: 2025-05-17 00:30:25.466 [INFO][5970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" HandleID="k8s-pod-network.930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:25.483110 containerd[1808]: 2025-05-17 00:30:25.466 [INFO][5970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:25.483110 containerd[1808]: 2025-05-17 00:30:25.476 [INFO][5970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:25.483110 containerd[1808]: 2025-05-17 00:30:25.479 [WARNING][5970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" HandleID="k8s-pod-network.930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:25.483110 containerd[1808]: 2025-05-17 00:30:25.479 [INFO][5970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" HandleID="k8s-pod-network.930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:25.483110 containerd[1808]: 2025-05-17 00:30:25.481 [INFO][5970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:25.483110 containerd[1808]: 2025-05-17 00:30:25.482 [INFO][5919] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:25.483555 containerd[1808]: time="2025-05-17T00:30:25.483185930Z" level=info msg="TearDown network for sandbox \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\" successfully" May 17 00:30:25.483555 containerd[1808]: time="2025-05-17T00:30:25.483202063Z" level=info msg="StopPodSandbox for \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\" returns successfully" May 17 00:30:25.483635 containerd[1808]: time="2025-05-17T00:30:25.483620542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c7b857f57-2t2zm,Uid:7e5ca975-ef7f-414a-942f-bcd57dc8d07a,Namespace:calico-system,Attempt:1,}" May 17 00:30:25.485568 systemd[1]: run-netns-cni\x2d5145f608\x2d57bc\x2d67c3\x2d3b96\x2d684280059335.mount: Deactivated successfully. May 17 00:30:25.488156 containerd[1808]: 2025-05-17 00:30:25.456 [INFO][5918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:25.488156 containerd[1808]: 2025-05-17 00:30:25.456 [INFO][5918] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" iface="eth0" netns="/var/run/netns/cni-08ba5335-27cd-c4b1-1b47-cc2d4399f733" May 17 00:30:25.488156 containerd[1808]: 2025-05-17 00:30:25.456 [INFO][5918] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" iface="eth0" netns="/var/run/netns/cni-08ba5335-27cd-c4b1-1b47-cc2d4399f733" May 17 00:30:25.488156 containerd[1808]: 2025-05-17 00:30:25.456 [INFO][5918] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" iface="eth0" netns="/var/run/netns/cni-08ba5335-27cd-c4b1-1b47-cc2d4399f733" May 17 00:30:25.488156 containerd[1808]: 2025-05-17 00:30:25.456 [INFO][5918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:25.488156 containerd[1808]: 2025-05-17 00:30:25.456 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:25.488156 containerd[1808]: 2025-05-17 00:30:25.466 [INFO][5981] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" HandleID="k8s-pod-network.3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:25.488156 containerd[1808]: 2025-05-17 00:30:25.466 [INFO][5981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:25.488156 containerd[1808]: 2025-05-17 00:30:25.481 [INFO][5981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:25.488156 containerd[1808]: 2025-05-17 00:30:25.485 [WARNING][5981] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" HandleID="k8s-pod-network.3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:25.488156 containerd[1808]: 2025-05-17 00:30:25.485 [INFO][5981] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" HandleID="k8s-pod-network.3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:25.488156 containerd[1808]: 2025-05-17 00:30:25.486 [INFO][5981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:25.488156 containerd[1808]: 2025-05-17 00:30:25.487 [INFO][5918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:25.488535 containerd[1808]: time="2025-05-17T00:30:25.488254613Z" level=info msg="TearDown network for sandbox \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\" successfully" May 17 00:30:25.488535 containerd[1808]: time="2025-05-17T00:30:25.488271363Z" level=info msg="StopPodSandbox for \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\" returns successfully" May 17 00:30:25.488662 containerd[1808]: time="2025-05-17T00:30:25.488645143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-688cf69547-wc762,Uid:93f363c3-f873-4d07-a216-fd2fe6414a28,Namespace:calico-apiserver,Attempt:1,}" May 17 00:30:25.531391 systemd-networkd[1605]: calib964d06d6eb: Link UP May 17 00:30:25.531638 systemd-networkd[1605]: calib964d06d6eb: Gained carrier May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.498 [INFO][6029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0 goldmane-8f77d7b6c- calico-system 79106d86-9187-44ab-a1d7-9ef14c711cf6 931 0 2025-05-17 00:30:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.3-n-65a4af4639 goldmane-8f77d7b6c-st7xr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib964d06d6eb [] [] }} ContainerID="b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" Namespace="calico-system" Pod="goldmane-8f77d7b6c-st7xr" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-" May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.498 [INFO][6029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" Namespace="calico-system" Pod="goldmane-8f77d7b6c-st7xr" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.511 [INFO][6114] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" HandleID="k8s-pod-network.b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" Workload="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:25.537400 containerd[1808]: 2025-05-17 
00:30:25.511 [INFO][6114] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" HandleID="k8s-pod-network.b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" Workload="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f710), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-65a4af4639", "pod":"goldmane-8f77d7b6c-st7xr", "timestamp":"2025-05-17 00:30:25.51152772 +0000 UTC"}, Hostname:"ci-4081.3.3-n-65a4af4639", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.511 [INFO][6114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.511 [INFO][6114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.511 [INFO][6114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-65a4af4639' May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.515 [INFO][6114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.517 [INFO][6114] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.520 [INFO][6114] ipam/ipam.go 511: Trying affinity for 192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.521 [INFO][6114] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.522 [INFO][6114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.522 [INFO][6114] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.128/26 handle="k8s-pod-network.b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.523 [INFO][6114] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04 May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.525 [INFO][6114] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.128/26 handle="k8s-pod-network.b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.529 [INFO][6114] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.133/26] block=192.168.48.128/26 handle="k8s-pod-network.b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.529 [INFO][6114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.133/26] handle="k8s-pod-network.b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" host="ci-4081.3.3-n-65a4af4639" May 17 
00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.529 [INFO][6114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:25.537400 containerd[1808]: 2025-05-17 00:30:25.529 [INFO][6114] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.133/26] IPv6=[] ContainerID="b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" HandleID="k8s-pod-network.b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" Workload="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:25.537893 containerd[1808]: 2025-05-17 00:30:25.530 [INFO][6029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" Namespace="calico-system" Pod="goldmane-8f77d7b6c-st7xr" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"79106d86-9187-44ab-a1d7-9ef14c711cf6", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"", Pod:"goldmane-8f77d7b6c-st7xr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.48.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib964d06d6eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:25.537893 containerd[1808]: 2025-05-17 00:30:25.530 [INFO][6029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.133/32] ContainerID="b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" Namespace="calico-system" Pod="goldmane-8f77d7b6c-st7xr" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:25.537893 containerd[1808]: 2025-05-17 00:30:25.530 [INFO][6029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib964d06d6eb ContainerID="b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" Namespace="calico-system" Pod="goldmane-8f77d7b6c-st7xr" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:25.537893 containerd[1808]: 2025-05-17 00:30:25.531 [INFO][6029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" Namespace="calico-system" Pod="goldmane-8f77d7b6c-st7xr" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:25.537893 containerd[1808]: 2025-05-17 00:30:25.532 [INFO][6029] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" Namespace="calico-system" Pod="goldmane-8f77d7b6c-st7xr" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"79106d86-9187-44ab-a1d7-9ef14c711cf6", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04", Pod:"goldmane-8f77d7b6c-st7xr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.48.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib964d06d6eb", MAC:"52:d5:bc:01:8b:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:25.537893 containerd[1808]: 2025-05-17 00:30:25.536 [INFO][6029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04" Namespace="calico-system" Pod="goldmane-8f77d7b6c-st7xr" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:25.545433 containerd[1808]: time="2025-05-17T00:30:25.545193618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:25.545433 containerd[1808]: time="2025-05-17T00:30:25.545410146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:25.545433 containerd[1808]: time="2025-05-17T00:30:25.545417947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:25.545557 containerd[1808]: time="2025-05-17T00:30:25.545469223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:25.561612 systemd[1]: Started cri-containerd-b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04.scope - libcontainer container b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04. 
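goldmane-8f77d7b6c-st7xr receives 192.168.48.133/26, the next address after the 192.168.48.132 claimed for the apiserver pod earlier, both from the node's affine block 192.168.48.128/26. A loose illustration of first-free assignment within a block (Calico's real allocator uses bitmapped blocks under the host-wide lock shown in the log; this linear scan only sketches the outcome):

```go
package main

import (
	"fmt"
	"net"
)

// firstFree returns the lowest address in block not present in used.
func firstFree(block *net.IPNet, used map[string]bool) net.IP {
	for ip := block.IP.Mask(block.Mask); block.Contains(ip); ip = next(ip) {
		if !used[ip.String()] {
			return ip
		}
	}
	return nil
}

// next returns ip+1 without modifying its argument.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.48.128/26")
	// .128-.131 are assumed claimed by allocations earlier than this
	// excerpt; .132 went to the apiserver pod above.
	used := map[string]bool{
		"192.168.48.128": true, "192.168.48.129": true,
		"192.168.48.130": true, "192.168.48.131": true,
		"192.168.48.132": true,
	}
	fmt.Println(firstFree(block, used)) // 192.168.48.133
}
```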
May 17 00:30:25.583605 containerd[1808]: time="2025-05-17T00:30:25.583581746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-st7xr,Uid:79106d86-9187-44ab-a1d7-9ef14c711cf6,Namespace:calico-system,Attempt:1,} returns sandbox id \"b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04\"" May 17 00:30:25.677645 systemd-networkd[1605]: cali2b1e6a2f145: Link UP May 17 00:30:25.678467 systemd-networkd[1605]: cali2b1e6a2f145: Gained carrier May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.500 [INFO][6043] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0 coredns-7c65d6cfc9- kube-system 441d803b-df59-4a3e-b55c-43834c087e2b 933 0 2025-05-17 00:29:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-65a4af4639 coredns-7c65d6cfc9-2nkhg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2b1e6a2f145 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2nkhg" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-" May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.500 [INFO][6043] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2nkhg" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.513 [INFO][6123] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" HandleID="k8s-pod-network.3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.513 [INFO][6123] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" HandleID="k8s-pod-network.3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f840), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-65a4af4639", "pod":"coredns-7c65d6cfc9-2nkhg", "timestamp":"2025-05-17 00:30:25.513666979 +0000 UTC"}, Hostname:"ci-4081.3.3-n-65a4af4639", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.513 [INFO][6123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.529 [INFO][6123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.529 [INFO][6123] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-65a4af4639' May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.617 [INFO][6123] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.627 [INFO][6123] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.636 [INFO][6123] ipam/ipam.go 511: Trying affinity for 192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.640 [INFO][6123] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.645 [INFO][6123] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.645 [INFO][6123] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.128/26 handle="k8s-pod-network.3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.648 [INFO][6123] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44 May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.656 [INFO][6123] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.128/26 handle="k8s-pod-network.3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.667 [INFO][6123] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.134/26] block=192.168.48.128/26 handle="k8s-pod-network.3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.667 [INFO][6123] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.134/26] handle="k8s-pod-network.3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.667 [INFO][6123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:30:25.697062 containerd[1808]: 2025-05-17 00:30:25.667 [INFO][6123] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.134/26] IPv6=[] ContainerID="3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" HandleID="k8s-pod-network.3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:25.697587 containerd[1808]: 2025-05-17 00:30:25.671 [INFO][6043] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2nkhg" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"441d803b-df59-4a3e-b55c-43834c087e2b", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"", Pod:"coredns-7c65d6cfc9-2nkhg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b1e6a2f145", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:25.697587 containerd[1808]: 2025-05-17 00:30:25.671 [INFO][6043] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.134/32] ContainerID="3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2nkhg" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:25.697587 containerd[1808]: 2025-05-17 00:30:25.671 [INFO][6043] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b1e6a2f145 ContainerID="3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2nkhg" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:25.697587 containerd[1808]: 2025-05-17 00:30:25.679 [INFO][6043] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-2nkhg" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:25.697587 containerd[1808]: 2025-05-17 00:30:25.680 [INFO][6043] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2nkhg" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"441d803b-df59-4a3e-b55c-43834c087e2b", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44", Pod:"coredns-7c65d6cfc9-2nkhg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b1e6a2f145", MAC:"02:d9:3f:e1:71:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:25.697587 containerd[1808]: 2025-05-17 00:30:25.696 [INFO][6043] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2nkhg" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:25.705986 containerd[1808]: time="2025-05-17T00:30:25.705916518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:25.705986 containerd[1808]: time="2025-05-17T00:30:25.705951176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:25.705986 containerd[1808]: time="2025-05-17T00:30:25.705958206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:25.706114 containerd[1808]: time="2025-05-17T00:30:25.706004451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:25.731463 systemd[1]: Started cri-containerd-3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44.scope - libcontainer container 3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44. May 17 00:30:25.742905 systemd-networkd[1605]: cali47e7886e210: Link UP May 17 00:30:25.743034 systemd-networkd[1605]: cali47e7886e210: Gained carrier May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.504 [INFO][6064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0 calico-kube-controllers-c7b857f57- calico-system 7e5ca975-ef7f-414a-942f-bcd57dc8d07a 932 0 2025-05-17 00:30:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c7b857f57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-n-65a4af4639 calico-kube-controllers-c7b857f57-2t2zm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali47e7886e210 [] [] }} ContainerID="87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" Namespace="calico-system" Pod="calico-kube-controllers-c7b857f57-2t2zm" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-" May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.505 [INFO][6064] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" Namespace="calico-system" Pod="calico-kube-controllers-c7b857f57-2t2zm" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.518 [INFO][6144] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" HandleID="k8s-pod-network.87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.518 [INFO][6144] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" HandleID="k8s-pod-network.87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139710), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-65a4af4639", "pod":"calico-kube-controllers-c7b857f57-2t2zm", "timestamp":"2025-05-17 00:30:25.518044604 +0000 UTC"}, Hostname:"ci-4081.3.3-n-65a4af4639", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.518 [INFO][6144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.667 [INFO][6144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
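The systemd-networkd entries here show the host side of the pod's veth pair (cali47e7886e210) coming up and gaining carrier immediately after the CNI plugin creates it. As a minimal sketch of that low-level step, bringing up a named link with the github.com/vishvananda/netlink package, which is the same netlink mechanism the Calico Linux dataplane drives; the interface name is copied from the log, everything else is illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Host-side veth name as reported by cni-plugin/dataplane_linux.go above.
	const ifaceName = "cali47e7886e210"

	link, err := netlink.LinkByName(ifaceName)
	if err != nil {
		log.Fatalf("lookup %s: %v", ifaceName, err)
	}
	// Equivalent of `ip link set cali47e7886e210 up`; systemd-networkd then
	// observes the transition and logs "Link UP" / "Gained carrier".
	if err := netlink.LinkSetUp(link); err != nil {
		log.Fatalf("set %s up: %v", ifaceName, err)
	}
	fmt.Printf("%s is up (index %d)\n", ifaceName, link.Attrs().Index)
}
```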
May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.667 [INFO][6144] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-65a4af4639' May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.716 [INFO][6144] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.726 [INFO][6144] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.732 [INFO][6144] ipam/ipam.go 511: Trying affinity for 192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.733 [INFO][6144] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.735 [INFO][6144] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.735 [INFO][6144] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.128/26 handle="k8s-pod-network.87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.735 [INFO][6144] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5 May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.737 [INFO][6144] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.128/26 handle="k8s-pod-network.87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.741 [INFO][6144] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.135/26] block=192.168.48.128/26 handle="k8s-pod-network.87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.741 [INFO][6144] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.135/26] handle="k8s-pod-network.87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.741 [INFO][6144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
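The IPAM entries above walk a host-affine /26 block (192.168.48.128/26) under the host-wide lock and hand out the first free address, which is why coredns received .134 and calico-kube-controllers received .135 right after it. A self-contained sketch of that "first free address in the block" scan using only the standard library; the block CIDR and the claimed addresses come from the log, the rest of the allocation set is hypothetical:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in block that is not already allocated.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	block := netip.MustParsePrefix("192.168.48.128/26") // block from the log
	allocated := map[netip.Addr]bool{}
	// Pretend .128-.134 are taken (earlier pods on this node, per the log).
	for a := block.Addr(); a.Compare(netip.MustParseAddr("192.168.48.135")) < 0; a = a.Next() {
		allocated[a] = true
	}
	if ip, ok := nextFree(block, allocated); ok {
		fmt.Println("next free:", ip) // next free: 192.168.48.135
	}
}
```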
May 17 00:30:25.748256 containerd[1808]: 2025-05-17 00:30:25.741 [INFO][6144] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.135/26] IPv6=[] ContainerID="87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" HandleID="k8s-pod-network.87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:25.748664 containerd[1808]: 2025-05-17 00:30:25.742 [INFO][6064] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" Namespace="calico-system" Pod="calico-kube-controllers-c7b857f57-2t2zm" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0", GenerateName:"calico-kube-controllers-c7b857f57-", Namespace:"calico-system", SelfLink:"", UID:"7e5ca975-ef7f-414a-942f-bcd57dc8d07a", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c7b857f57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"", Pod:"calico-kube-controllers-c7b857f57-2t2zm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.48.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47e7886e210", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:25.748664 containerd[1808]: 2025-05-17 00:30:25.742 [INFO][6064] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.135/32] ContainerID="87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" Namespace="calico-system" Pod="calico-kube-controllers-c7b857f57-2t2zm" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:25.748664 containerd[1808]: 2025-05-17 00:30:25.742 [INFO][6064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47e7886e210 ContainerID="87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" Namespace="calico-system" Pod="calico-kube-controllers-c7b857f57-2t2zm" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:25.748664 containerd[1808]: 2025-05-17 00:30:25.743 [INFO][6064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" Namespace="calico-system" Pod="calico-kube-controllers-c7b857f57-2t2zm" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 
00:30:25.748664 containerd[1808]: 2025-05-17 00:30:25.743 [INFO][6064] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" Namespace="calico-system" Pod="calico-kube-controllers-c7b857f57-2t2zm" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0", GenerateName:"calico-kube-controllers-c7b857f57-", Namespace:"calico-system", SelfLink:"", UID:"7e5ca975-ef7f-414a-942f-bcd57dc8d07a", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c7b857f57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5", Pod:"calico-kube-controllers-c7b857f57-2t2zm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.48.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47e7886e210", MAC:"2a:27:95:d2:e2:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:25.748664 containerd[1808]: 2025-05-17 00:30:25.747 [INFO][6064] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5" Namespace="calico-system" Pod="calico-kube-controllers-c7b857f57-2t2zm" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:25.754899 containerd[1808]: time="2025-05-17T00:30:25.754878622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2nkhg,Uid:441d803b-df59-4a3e-b55c-43834c087e2b,Namespace:kube-system,Attempt:1,} returns sandbox id \"3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44\"" May 17 00:30:25.756169 containerd[1808]: time="2025-05-17T00:30:25.756131746Z" level=info msg="CreateContainer within sandbox \"3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:30:25.757130 containerd[1808]: time="2025-05-17T00:30:25.756900210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:25.757185 containerd[1808]: time="2025-05-17T00:30:25.757127879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:25.757185 containerd[1808]: time="2025-05-17T00:30:25.757137802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:25.757246 containerd[1808]: time="2025-05-17T00:30:25.757194533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:25.760591 containerd[1808]: time="2025-05-17T00:30:25.760542279Z" level=info msg="CreateContainer within sandbox \"3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1a5569b5ef00ce95546d2635fddd2dd2ff1261061d7b7d22875c5f8ef8ff96ab\"" May 17 00:30:25.760843 containerd[1808]: time="2025-05-17T00:30:25.760832170Z" level=info msg="StartContainer for \"1a5569b5ef00ce95546d2635fddd2dd2ff1261061d7b7d22875c5f8ef8ff96ab\"" May 17 00:30:25.770548 systemd[1]: Started cri-containerd-87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5.scope - libcontainer container 87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5. May 17 00:30:25.772196 systemd[1]: Started cri-containerd-1a5569b5ef00ce95546d2635fddd2dd2ff1261061d7b7d22875c5f8ef8ff96ab.scope - libcontainer container 1a5569b5ef00ce95546d2635fddd2dd2ff1261061d7b7d22875c5f8ef8ff96ab. May 17 00:30:25.789739 containerd[1808]: time="2025-05-17T00:30:25.789667297Z" level=info msg="StartContainer for \"1a5569b5ef00ce95546d2635fddd2dd2ff1261061d7b7d22875c5f8ef8ff96ab\" returns successfully" May 17 00:30:25.793691 containerd[1808]: time="2025-05-17T00:30:25.793666175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c7b857f57-2t2zm,Uid:7e5ca975-ef7f-414a-942f-bcd57dc8d07a,Namespace:calico-system,Attempt:1,} returns sandbox id \"87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5\"" May 17 00:30:25.881203 systemd-networkd[1605]: calibeca2abf310: Link UP May 17 00:30:25.881864 systemd-networkd[1605]: calibeca2abf310: Gained carrier May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.509 [INFO][6090] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0 calico-apiserver-688cf69547- calico-apiserver 93f363c3-f873-4d07-a216-fd2fe6414a28 934 0 2025-05-17 00:29:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:688cf69547 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-65a4af4639 calico-apiserver-688cf69547-wc762 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibeca2abf310 [] [] }} ContainerID="04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-wc762" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-" May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.509 [INFO][6090] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-wc762" 
WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.523 [INFO][6162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" HandleID="k8s-pod-network.04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.523 [INFO][6162] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" HandleID="k8s-pod-network.04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000254ad0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-65a4af4639", "pod":"calico-apiserver-688cf69547-wc762", "timestamp":"2025-05-17 00:30:25.523483577 +0000 UTC"}, Hostname:"ci-4081.3.3-n-65a4af4639", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.523 [INFO][6162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.741 [INFO][6162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.741 [INFO][6162] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-65a4af4639' May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.818 [INFO][6162] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.828 [INFO][6162] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.837 [INFO][6162] ipam/ipam.go 511: Trying affinity for 192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.841 [INFO][6162] ipam/ipam.go 158: Attempting to load block cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.847 [INFO][6162] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.48.128/26 host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.847 [INFO][6162] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.48.128/26 handle="k8s-pod-network.04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.851 [INFO][6162] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.858 [INFO][6162] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.48.128/26 handle="k8s-pod-network.04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" host="ci-4081.3.3-n-65a4af4639" 
May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.872 [INFO][6162] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.48.136/26] block=192.168.48.128/26 handle="k8s-pod-network.04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.872 [INFO][6162] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.48.136/26] handle="k8s-pod-network.04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" host="ci-4081.3.3-n-65a4af4639" May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.872 [INFO][6162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:25.908088 containerd[1808]: 2025-05-17 00:30:25.872 [INFO][6162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.136/26] IPv6=[] ContainerID="04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" HandleID="k8s-pod-network.04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:25.911250 containerd[1808]: 2025-05-17 00:30:25.876 [INFO][6090] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-wc762" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0", GenerateName:"calico-apiserver-688cf69547-", Namespace:"calico-apiserver", SelfLink:"", UID:"93f363c3-f873-4d07-a216-fd2fe6414a28", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"688cf69547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"", Pod:"calico-apiserver-688cf69547-wc762", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibeca2abf310", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:25.911250 containerd[1808]: 2025-05-17 00:30:25.877 [INFO][6090] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.48.136/32] ContainerID="04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-wc762" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:25.911250 containerd[1808]: 2025-05-17 00:30:25.877 [INFO][6090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibeca2abf310 
ContainerID="04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-wc762" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:25.911250 containerd[1808]: 2025-05-17 00:30:25.882 [INFO][6090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-wc762" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:25.911250 containerd[1808]: 2025-05-17 00:30:25.882 [INFO][6090] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-wc762" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0", GenerateName:"calico-apiserver-688cf69547-", Namespace:"calico-apiserver", SelfLink:"", UID:"93f363c3-f873-4d07-a216-fd2fe6414a28", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"688cf69547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf", Pod:"calico-apiserver-688cf69547-wc762", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibeca2abf310", MAC:"d2:78:9f:a4:1c:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:25.911250 containerd[1808]: 2025-05-17 00:30:25.902 [INFO][6090] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf" Namespace="calico-apiserver" Pod="calico-apiserver-688cf69547-wc762" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:25.922349 containerd[1808]: time="2025-05-17T00:30:25.922266246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:25.922534 containerd[1808]: time="2025-05-17T00:30:25.922476313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:25.922534 containerd[1808]: time="2025-05-17T00:30:25.922486242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:25.922534 containerd[1808]: time="2025-05-17T00:30:25.922525883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:25.943671 systemd[1]: Started cri-containerd-04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf.scope - libcontainer container 04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf. May 17 00:30:25.965676 containerd[1808]: time="2025-05-17T00:30:25.965655616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-688cf69547-wc762,Uid:93f363c3-f873-4d07-a216-fd2fe6414a28,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf\"" May 17 00:30:26.058488 systemd-networkd[1605]: vxlan.calico: Gained IPv6LL May 17 00:30:26.186728 systemd-networkd[1605]: calib62504a881c: Gained IPv6LL May 17 00:30:26.445137 systemd[1]: run-netns-cni\x2d08ba5335\x2d27cd\x2dc4b1\x2d1b47\x2dcc2d4399f733.mount: Deactivated successfully. May 17 00:30:26.547969 kubelet[3069]: I0517 00:30:26.547930 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2nkhg" podStartSLOduration=35.54791413 podStartE2EDuration="35.54791413s" podCreationTimestamp="2025-05-17 00:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:30:26.542732482 +0000 UTC m=+41.190665423" watchObservedRunningTime="2025-05-17 00:30:26.54791413 +0000 UTC m=+41.195847074" May 17 00:30:26.890498 systemd-networkd[1605]: calib964d06d6eb: Gained IPv6LL May 17 00:30:27.146697 systemd-networkd[1605]: cali2b1e6a2f145: Gained IPv6LL May 17 00:30:27.722463 systemd-networkd[1605]: cali47e7886e210: Gained IPv6LL May 17 00:30:27.722685 systemd-networkd[1605]: calibeca2abf310: Gained IPv6LL May 17 00:30:27.993678 containerd[1808]: time="2025-05-17T00:30:27.993588720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:27.993864 containerd[1808]: time="2025-05-17T00:30:27.993806504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:30:27.994157 containerd[1808]: time="2025-05-17T00:30:27.994112556Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:27.995262 containerd[1808]: time="2025-05-17T00:30:27.995221162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:27.996026 containerd[1808]: time="2025-05-17T00:30:27.995985685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 2.737736692s" May 17 00:30:27.996026 containerd[1808]: time="2025-05-17T00:30:27.996002915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:30:27.996480 containerd[1808]: time="2025-05-17T00:30:27.996465914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:30:27.996994 containerd[1808]: time="2025-05-17T00:30:27.996956431Z" level=info msg="CreateContainer within sandbox \"aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:30:28.000904 containerd[1808]: time="2025-05-17T00:30:28.000862008Z" level=info msg="CreateContainer within sandbox \"aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0247d1921cd2d27e6c08ea849650b7980312e95b5389f642fe39e1175a7cff6a\"" May 17 00:30:28.001080 containerd[1808]: time="2025-05-17T00:30:28.001067198Z" level=info msg="StartContainer for \"0247d1921cd2d27e6c08ea849650b7980312e95b5389f642fe39e1175a7cff6a\"" May 17 00:30:28.028538 systemd[1]: Started cri-containerd-0247d1921cd2d27e6c08ea849650b7980312e95b5389f642fe39e1175a7cff6a.scope - libcontainer container 0247d1921cd2d27e6c08ea849650b7980312e95b5389f642fe39e1175a7cff6a. May 17 00:30:28.051070 containerd[1808]: time="2025-05-17T00:30:28.051019336Z" level=info msg="StartContainer for \"0247d1921cd2d27e6c08ea849650b7980312e95b5389f642fe39e1175a7cff6a\" returns successfully" May 17 00:30:28.549448 kubelet[3069]: I0517 00:30:28.549412 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-688cf69547-z8c5c" podStartSLOduration=26.176839936 podStartE2EDuration="29.549396993s" podCreationTimestamp="2025-05-17 00:29:59 +0000 UTC" firstStartedPulling="2025-05-17 00:30:24.623813747 +0000 UTC m=+39.271746690" lastFinishedPulling="2025-05-17 00:30:27.996370808 +0000 UTC m=+42.644303747" observedRunningTime="2025-05-17 00:30:28.54935458 +0000 UTC m=+43.197287521" watchObservedRunningTime="2025-05-17 00:30:28.549396993 +0000 UTC m=+43.197329929" May 17 00:30:29.543918 kubelet[3069]: I0517 00:30:29.543892 3069 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:30:29.847658 containerd[1808]: time="2025-05-17T00:30:29.847595065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:29.847865 containerd[1808]: time="2025-05-17T00:30:29.847737933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:30:29.848173 containerd[1808]: time="2025-05-17T00:30:29.848161133Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:29.849156 containerd[1808]: time="2025-05-17T00:30:29.849115881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 
00:30:29.849581 containerd[1808]: time="2025-05-17T00:30:29.849540735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.85305916s" May 17 00:30:29.849581 containerd[1808]: time="2025-05-17T00:30:29.849556462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:30:29.850057 containerd[1808]: time="2025-05-17T00:30:29.850008148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:30:29.850464 containerd[1808]: time="2025-05-17T00:30:29.850452320Z" level=info msg="CreateContainer within sandbox \"0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:30:29.855311 containerd[1808]: time="2025-05-17T00:30:29.855297221Z" level=info msg="CreateContainer within sandbox \"0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bc7fd79226cc80e440f1993b1d68b89ac24e21d63e56e437b94f13da4c5f94f4\"" May 17 00:30:29.855574 containerd[1808]: time="2025-05-17T00:30:29.855560190Z" level=info msg="StartContainer for \"bc7fd79226cc80e440f1993b1d68b89ac24e21d63e56e437b94f13da4c5f94f4\"" May 17 00:30:29.878673 systemd[1]: Started cri-containerd-bc7fd79226cc80e440f1993b1d68b89ac24e21d63e56e437b94f13da4c5f94f4.scope - libcontainer container bc7fd79226cc80e440f1993b1d68b89ac24e21d63e56e437b94f13da4c5f94f4. 
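The "in 1.85305916s" figure in the "Pulled image" entry is simply the gap between the PullImage request and pull completion; since containerd logs RFC 3339 timestamps with nanosecond precision, the arithmetic can be checked directly from the lines above. A stdlib sketch with both timestamps copied from the log (containerd measures the interval internally, so this recomputation agrees only to within a few microseconds):

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// PullImage request and "Pulled image" completion for node-driver-registrar.
	start := mustParse("2025-05-17T00:30:27.996465914Z")
	end := mustParse("2025-05-17T00:30:29.849540735Z")
	fmt.Println(end.Sub(start)) // 1.853074821s, ~16µs off the logged 1.85305916s
}
```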
May 17 00:30:29.891245 containerd[1808]: time="2025-05-17T00:30:29.891224135Z" level=info msg="StartContainer for \"bc7fd79226cc80e440f1993b1d68b89ac24e21d63e56e437b94f13da4c5f94f4\" returns successfully" May 17 00:30:30.163134 containerd[1808]: time="2025-05-17T00:30:30.162857902Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:30:30.163891 containerd[1808]: time="2025-05-17T00:30:30.163824916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:30:30.163965 containerd[1808]: time="2025-05-17T00:30:30.163926199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:30:30.164067 kubelet[3069]: E0517 00:30:30.164046 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:30:30.164284 kubelet[3069]: E0517 00:30:30.164076 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:30:30.164284 kubelet[3069]: E0517 00:30:30.164218 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6kf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-st7xr_calico-system(79106d86-9187-44ab-a1d7-9ef14c711cf6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:30:30.164403 containerd[1808]: time="2025-05-17T00:30:30.164300892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:30:30.165394 kubelet[3069]: E0517 00:30:30.165361 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:30:30.431744 kubelet[3069]: I0517 00:30:30.431537 3069 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:30:30.431744 kubelet[3069]: I0517 00:30:30.431607 3069 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:30:30.553391 kubelet[3069]: E0517 00:30:30.553283 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:30:30.592670 kubelet[3069]: I0517 00:30:30.592641 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zdwc8" podStartSLOduration=23.287738881 podStartE2EDuration="29.59263102s" podCreationTimestamp="2025-05-17 00:30:01 +0000 UTC" firstStartedPulling="2025-05-17 00:30:23.545021181 +0000 UTC m=+38.192954121" lastFinishedPulling="2025-05-17 00:30:29.84991332 +0000 UTC m=+44.497846260" observedRunningTime="2025-05-17 00:30:30.592308579 +0000 UTC m=+45.240241522" watchObservedRunningTime="2025-05-17 00:30:30.59263102 +0000 UTC m=+45.240563956" May 17 00:30:32.866357 containerd[1808]: time="2025-05-17T00:30:32.866296218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:32.866574 containerd[1808]: time="2025-05-17T00:30:32.866553310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:30:32.866949 containerd[1808]: time="2025-05-17T00:30:32.866912326Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:32.867921 containerd[1808]: time="2025-05-17T00:30:32.867880534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:32.868303 containerd[1808]: time="2025-05-17T00:30:32.868267016Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 2.703948632s" May 17 00:30:32.868303 containerd[1808]: time="2025-05-17T00:30:32.868283077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:30:32.868787 containerd[1808]: time="2025-05-17T00:30:32.868745748Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:30:32.871782 containerd[1808]: time="2025-05-17T00:30:32.871737538Z" level=info msg="CreateContainer within sandbox \"87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:30:32.876030 containerd[1808]: time="2025-05-17T00:30:32.875983961Z" level=info msg="CreateContainer within sandbox \"87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"86447bcd57580022f0dd5a7603fa0a540655f96d0fccf1a98a539d57ddc6d60c\"" May 17 00:30:32.876206 containerd[1808]: time="2025-05-17T00:30:32.876194758Z" level=info msg="StartContainer for \"86447bcd57580022f0dd5a7603fa0a540655f96d0fccf1a98a539d57ddc6d60c\"" May 17 00:30:32.903698 systemd[1]: Started cri-containerd-86447bcd57580022f0dd5a7603fa0a540655f96d0fccf1a98a539d57ddc6d60c.scope - libcontainer container 86447bcd57580022f0dd5a7603fa0a540655f96d0fccf1a98a539d57ddc6d60c. May 17 00:30:32.933407 containerd[1808]: time="2025-05-17T00:30:32.933377021Z" level=info msg="StartContainer for \"86447bcd57580022f0dd5a7603fa0a540655f96d0fccf1a98a539d57ddc6d60c\" returns successfully" May 17 00:30:33.301091 containerd[1808]: time="2025-05-17T00:30:33.301038143Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:33.301233 containerd[1808]: time="2025-05-17T00:30:33.301177873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:30:33.302557 containerd[1808]: time="2025-05-17T00:30:33.302510222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 433.749612ms" May 17 00:30:33.302557 containerd[1808]: time="2025-05-17T00:30:33.302530972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:30:33.303990 containerd[1808]: time="2025-05-17T00:30:33.303975131Z" level=info msg="CreateContainer within sandbox \"04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:30:33.308967 containerd[1808]: time="2025-05-17T00:30:33.308953126Z" level=info msg="CreateContainer within sandbox \"04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"95fdd03f1c4da4e1f2954ea50398d1153e340838fbd4dc28bb60f54f42a87216\"" May 17 00:30:33.309217 containerd[1808]: time="2025-05-17T00:30:33.309178303Z" level=info msg="StartContainer for \"95fdd03f1c4da4e1f2954ea50398d1153e340838fbd4dc28bb60f54f42a87216\"" May 17 00:30:33.326542 systemd[1]: Started cri-containerd-95fdd03f1c4da4e1f2954ea50398d1153e340838fbd4dc28bb60f54f42a87216.scope - libcontainer container 95fdd03f1c4da4e1f2954ea50398d1153e340838fbd4dc28bb60f54f42a87216. 
May 17 00:30:33.352010 containerd[1808]: time="2025-05-17T00:30:33.351957428Z" level=info msg="StartContainer for \"95fdd03f1c4da4e1f2954ea50398d1153e340838fbd4dc28bb60f54f42a87216\" returns successfully" May 17 00:30:33.564989 kubelet[3069]: I0517 00:30:33.564905 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c7b857f57-2t2zm" podStartSLOduration=25.490491856 podStartE2EDuration="32.564893724s" podCreationTimestamp="2025-05-17 00:30:01 +0000 UTC" firstStartedPulling="2025-05-17 00:30:25.794286823 +0000 UTC m=+40.442219762" lastFinishedPulling="2025-05-17 00:30:32.868688691 +0000 UTC m=+47.516621630" observedRunningTime="2025-05-17 00:30:33.564496911 +0000 UTC m=+48.212429851" watchObservedRunningTime="2025-05-17 00:30:33.564893724 +0000 UTC m=+48.212826660" May 17 00:30:33.570286 kubelet[3069]: I0517 00:30:33.570243 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-688cf69547-wc762" podStartSLOduration=27.233331609 podStartE2EDuration="34.57022509s" podCreationTimestamp="2025-05-17 00:29:59 +0000 UTC" firstStartedPulling="2025-05-17 00:30:25.966210847 +0000 UTC m=+40.614143786" lastFinishedPulling="2025-05-17 00:30:33.303104328 +0000 UTC m=+47.951037267" observedRunningTime="2025-05-17 00:30:33.570083757 +0000 UTC m=+48.218016698" watchObservedRunningTime="2025-05-17 00:30:33.57022509 +0000 UTC m=+48.218158031" May 17 00:30:35.396925 containerd[1808]: time="2025-05-17T00:30:35.396790134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:30:35.708239 containerd[1808]: time="2025-05-17T00:30:35.707968622Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:30:35.708917 containerd[1808]: time="2025-05-17T00:30:35.708841058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:30:35.709004 containerd[1808]: time="2025-05-17T00:30:35.708916239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:30:35.709068 kubelet[3069]: E0517 00:30:35.709042 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:30:35.709268 kubelet[3069]: E0517 00:30:35.709077 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:30:35.709268 kubelet[3069]: E0517 00:30:35.709144 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e162bdd4080846acbc669a202524e139,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hzcj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d7ff6d95-qgk6n_calico-system(de7e5665-5768-4a36-925b-d749b053cd37): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:30:35.711302 containerd[1808]: time="2025-05-17T00:30:35.711250038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:30:36.017055 containerd[1808]: time="2025-05-17T00:30:36.016951341Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:30:36.018026 containerd[1808]: time="2025-05-17T00:30:36.018005747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:30:36.018097 containerd[1808]: time="2025-05-17T00:30:36.018080290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:30:36.018190 kubelet[3069]: E0517 00:30:36.018168 3069 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:30:36.018238 kubelet[3069]: E0517 00:30:36.018200 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:30:36.018292 kubelet[3069]: E0517 00:30:36.018269 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzcj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d7ff6d95-qgk6n_calico-system(de7e5665-5768-4a36-925b-d749b053cd37): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:30:36.019591 kubelet[3069]: E0517 00:30:36.019520 3069 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:30:45.390505 containerd[1808]: time="2025-05-17T00:30:45.390415841Z" level=info msg="StopPodSandbox for \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\"" May 17 00:30:45.393421 containerd[1808]: time="2025-05-17T00:30:45.393394200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:30:45.427544 containerd[1808]: 2025-05-17 00:30:45.409 [WARNING][6755] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e114f42-fc56-4b66-8d81-a37f65ab357c", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f", Pod:"csi-node-driver-zdwc8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.48.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califef44539932", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.427544 containerd[1808]: 2025-05-17 00:30:45.409 [INFO][6755] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:45.427544 containerd[1808]: 2025-05-17 00:30:45.409 [INFO][6755] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" iface="eth0" netns="" May 17 00:30:45.427544 containerd[1808]: 2025-05-17 00:30:45.409 [INFO][6755] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:45.427544 containerd[1808]: 2025-05-17 00:30:45.409 [INFO][6755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:45.427544 containerd[1808]: 2025-05-17 00:30:45.420 [INFO][6775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" HandleID="k8s-pod-network.746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" Workload="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:45.427544 containerd[1808]: 2025-05-17 00:30:45.420 [INFO][6775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.427544 containerd[1808]: 2025-05-17 00:30:45.420 [INFO][6775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:45.427544 containerd[1808]: 2025-05-17 00:30:45.424 [WARNING][6775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" HandleID="k8s-pod-network.746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" Workload="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:45.427544 containerd[1808]: 2025-05-17 00:30:45.424 [INFO][6775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" HandleID="k8s-pod-network.746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" Workload="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:45.427544 containerd[1808]: 2025-05-17 00:30:45.425 [INFO][6775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.427544 containerd[1808]: 2025-05-17 00:30:45.426 [INFO][6755] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:45.428083 containerd[1808]: time="2025-05-17T00:30:45.427565984Z" level=info msg="TearDown network for sandbox \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\" successfully" May 17 00:30:45.428083 containerd[1808]: time="2025-05-17T00:30:45.427586268Z" level=info msg="StopPodSandbox for \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\" returns successfully" May 17 00:30:45.428083 containerd[1808]: time="2025-05-17T00:30:45.427890020Z" level=info msg="RemovePodSandbox for \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\"" May 17 00:30:45.428083 containerd[1808]: time="2025-05-17T00:30:45.427915338Z" level=info msg="Forcibly stopping sandbox \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\"" May 17 00:30:45.474580 containerd[1808]: 2025-05-17 00:30:45.451 [WARNING][6798] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e114f42-fc56-4b66-8d81-a37f65ab357c", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"0c3fb46820362ecb9d7d856d6a392bf87a99903b6e236dbbb08a2b073174265f", Pod:"csi-node-driver-zdwc8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.48.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califef44539932", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.474580 containerd[1808]: 2025-05-17 00:30:45.452 [INFO][6798] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:45.474580 containerd[1808]: 2025-05-17 00:30:45.452 [INFO][6798] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" iface="eth0" netns="" May 17 00:30:45.474580 containerd[1808]: 2025-05-17 00:30:45.452 [INFO][6798] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:45.474580 containerd[1808]: 2025-05-17 00:30:45.452 [INFO][6798] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:45.474580 containerd[1808]: 2025-05-17 00:30:45.465 [INFO][6815] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" HandleID="k8s-pod-network.746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" Workload="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:45.474580 containerd[1808]: 2025-05-17 00:30:45.465 [INFO][6815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.474580 containerd[1808]: 2025-05-17 00:30:45.465 [INFO][6815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:45.474580 containerd[1808]: 2025-05-17 00:30:45.471 [WARNING][6815] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" HandleID="k8s-pod-network.746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" Workload="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:45.474580 containerd[1808]: 2025-05-17 00:30:45.471 [INFO][6815] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" HandleID="k8s-pod-network.746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" Workload="ci--4081.3.3--n--65a4af4639-k8s-csi--node--driver--zdwc8-eth0" May 17 00:30:45.474580 containerd[1808]: 2025-05-17 00:30:45.472 [INFO][6815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.474580 containerd[1808]: 2025-05-17 00:30:45.473 [INFO][6798] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74" May 17 00:30:45.475266 containerd[1808]: time="2025-05-17T00:30:45.474610861Z" level=info msg="TearDown network for sandbox \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\" successfully" May 17 00:30:45.477157 containerd[1808]: time="2025-05-17T00:30:45.477141714Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:30:45.477204 containerd[1808]: time="2025-05-17T00:30:45.477183445Z" level=info msg="RemovePodSandbox \"746b3cea758b2523cc69b1590cf5fb5f15f1c2bf60463812703d0e1fd546ee74\" returns successfully" May 17 00:30:45.477757 containerd[1808]: time="2025-05-17T00:30:45.477741601Z" level=info msg="StopPodSandbox for \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\"" May 17 00:30:45.510668 containerd[1808]: 2025-05-17 00:30:45.494 [WARNING][6841] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"83bc9904-f910-4661-b26c-3bab2e3ff098", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53", Pod:"coredns-7c65d6cfc9-kbg69", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23226e6534b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.510668 containerd[1808]: 2025-05-17 00:30:45.494 [INFO][6841] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:45.510668 containerd[1808]: 2025-05-17 00:30:45.494 [INFO][6841] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" iface="eth0" netns="" May 17 00:30:45.510668 containerd[1808]: 2025-05-17 00:30:45.494 [INFO][6841] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:45.510668 containerd[1808]: 2025-05-17 00:30:45.494 [INFO][6841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:45.510668 containerd[1808]: 2025-05-17 00:30:45.504 [INFO][6858] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" HandleID="k8s-pod-network.3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:45.510668 containerd[1808]: 2025-05-17 00:30:45.504 [INFO][6858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.510668 containerd[1808]: 2025-05-17 00:30:45.504 [INFO][6858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:30:45.510668 containerd[1808]: 2025-05-17 00:30:45.508 [WARNING][6858] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" HandleID="k8s-pod-network.3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:45.510668 containerd[1808]: 2025-05-17 00:30:45.508 [INFO][6858] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" HandleID="k8s-pod-network.3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:45.510668 containerd[1808]: 2025-05-17 00:30:45.509 [INFO][6858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.510668 containerd[1808]: 2025-05-17 00:30:45.509 [INFO][6841] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:45.510987 containerd[1808]: time="2025-05-17T00:30:45.510671877Z" level=info msg="TearDown network for sandbox \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\" successfully" May 17 00:30:45.510987 containerd[1808]: time="2025-05-17T00:30:45.510688408Z" level=info msg="StopPodSandbox for \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\" returns successfully" May 17 00:30:45.511027 containerd[1808]: time="2025-05-17T00:30:45.511006170Z" level=info msg="RemovePodSandbox for \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\"" May 17 00:30:45.511044 containerd[1808]: time="2025-05-17T00:30:45.511027475Z" level=info msg="Forcibly stopping sandbox \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\"" May 17 00:30:45.548249 containerd[1808]: 2025-05-17 00:30:45.529 [WARNING][6882] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"83bc9904-f910-4661-b26c-3bab2e3ff098", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"51ec4b3e7329320d9797e2b6536387196f96bc23a93a436b2bf11fa7e9a81c53", Pod:"coredns-7c65d6cfc9-kbg69", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23226e6534b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.548249 containerd[1808]: 2025-05-17 00:30:45.529 [INFO][6882] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:45.548249 containerd[1808]: 2025-05-17 00:30:45.529 [INFO][6882] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" iface="eth0" netns="" May 17 00:30:45.548249 containerd[1808]: 2025-05-17 00:30:45.529 [INFO][6882] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:45.548249 containerd[1808]: 2025-05-17 00:30:45.529 [INFO][6882] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:45.548249 containerd[1808]: 2025-05-17 00:30:45.541 [INFO][6896] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" HandleID="k8s-pod-network.3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:45.548249 containerd[1808]: 2025-05-17 00:30:45.541 [INFO][6896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.548249 containerd[1808]: 2025-05-17 00:30:45.542 [INFO][6896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:30:45.548249 containerd[1808]: 2025-05-17 00:30:45.545 [WARNING][6896] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" HandleID="k8s-pod-network.3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:45.548249 containerd[1808]: 2025-05-17 00:30:45.545 [INFO][6896] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" HandleID="k8s-pod-network.3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--kbg69-eth0" May 17 00:30:45.548249 containerd[1808]: 2025-05-17 00:30:45.546 [INFO][6896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.548249 containerd[1808]: 2025-05-17 00:30:45.547 [INFO][6882] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae" May 17 00:30:45.548565 containerd[1808]: time="2025-05-17T00:30:45.548272222Z" level=info msg="TearDown network for sandbox \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\" successfully" May 17 00:30:45.549654 containerd[1808]: time="2025-05-17T00:30:45.549641122Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:30:45.549690 containerd[1808]: time="2025-05-17T00:30:45.549668116Z" level=info msg="RemovePodSandbox \"3c255fc875b975075b95cc2cdd12d0a1b19bf28160220568920ac827940105ae\" returns successfully" May 17 00:30:45.549872 containerd[1808]: time="2025-05-17T00:30:45.549861589Z" level=info msg="StopPodSandbox for \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\"" May 17 00:30:45.584058 containerd[1808]: 2025-05-17 00:30:45.567 [WARNING][6922] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0", GenerateName:"calico-apiserver-688cf69547-", Namespace:"calico-apiserver", SelfLink:"", UID:"47a8b75a-0ae5-4d27-9954-209696bc0aa7", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"688cf69547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d", Pod:"calico-apiserver-688cf69547-z8c5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib62504a881c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.584058 containerd[1808]: 2025-05-17 00:30:45.567 [INFO][6922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:45.584058 containerd[1808]: 2025-05-17 00:30:45.567 [INFO][6922] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" iface="eth0" netns="" May 17 00:30:45.584058 containerd[1808]: 2025-05-17 00:30:45.567 [INFO][6922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:45.584058 containerd[1808]: 2025-05-17 00:30:45.567 [INFO][6922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:45.584058 containerd[1808]: 2025-05-17 00:30:45.577 [INFO][6939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" HandleID="k8s-pod-network.90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:45.584058 containerd[1808]: 2025-05-17 00:30:45.577 [INFO][6939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.584058 containerd[1808]: 2025-05-17 00:30:45.577 [INFO][6939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:45.584058 containerd[1808]: 2025-05-17 00:30:45.581 [WARNING][6939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" HandleID="k8s-pod-network.90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:45.584058 containerd[1808]: 2025-05-17 00:30:45.581 [INFO][6939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" HandleID="k8s-pod-network.90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:45.584058 containerd[1808]: 2025-05-17 00:30:45.582 [INFO][6939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.584058 containerd[1808]: 2025-05-17 00:30:45.583 [INFO][6922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:45.584355 containerd[1808]: time="2025-05-17T00:30:45.584081961Z" level=info msg="TearDown network for sandbox \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\" successfully" May 17 00:30:45.584355 containerd[1808]: time="2025-05-17T00:30:45.584101032Z" level=info msg="StopPodSandbox for \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\" returns successfully" May 17 00:30:45.584400 containerd[1808]: time="2025-05-17T00:30:45.584361149Z" level=info msg="RemovePodSandbox for \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\"" May 17 00:30:45.584400 containerd[1808]: time="2025-05-17T00:30:45.584382528Z" level=info msg="Forcibly stopping sandbox \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\"" May 17 00:30:45.619866 containerd[1808]: 2025-05-17 00:30:45.601 [WARNING][6961] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0", GenerateName:"calico-apiserver-688cf69547-", Namespace:"calico-apiserver", SelfLink:"", UID:"47a8b75a-0ae5-4d27-9954-209696bc0aa7", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"688cf69547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"aed2b6305541a2a4be83b527ed9d78620c9d793e02b294348924e45ed1ee006d", Pod:"calico-apiserver-688cf69547-z8c5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib62504a881c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.619866 containerd[1808]: 2025-05-17 00:30:45.602 [INFO][6961] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:45.619866 containerd[1808]: 2025-05-17 00:30:45.602 [INFO][6961] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" iface="eth0" netns="" May 17 00:30:45.619866 containerd[1808]: 2025-05-17 00:30:45.602 [INFO][6961] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:45.619866 containerd[1808]: 2025-05-17 00:30:45.602 [INFO][6961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:45.619866 containerd[1808]: 2025-05-17 00:30:45.612 [INFO][6980] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" HandleID="k8s-pod-network.90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:45.619866 containerd[1808]: 2025-05-17 00:30:45.612 [INFO][6980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.619866 containerd[1808]: 2025-05-17 00:30:45.612 [INFO][6980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:45.619866 containerd[1808]: 2025-05-17 00:30:45.617 [WARNING][6980] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" HandleID="k8s-pod-network.90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:45.619866 containerd[1808]: 2025-05-17 00:30:45.617 [INFO][6980] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" HandleID="k8s-pod-network.90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--z8c5c-eth0" May 17 00:30:45.619866 containerd[1808]: 2025-05-17 00:30:45.618 [INFO][6980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.619866 containerd[1808]: 2025-05-17 00:30:45.619 [INFO][6961] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb" May 17 00:30:45.619866 containerd[1808]: time="2025-05-17T00:30:45.619839800Z" level=info msg="TearDown network for sandbox \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\" successfully" May 17 00:30:45.621218 containerd[1808]: time="2025-05-17T00:30:45.621177160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:30:45.621218 containerd[1808]: time="2025-05-17T00:30:45.621204596Z" level=info msg="RemovePodSandbox \"90cd0adb952a5d113a0803614be0fc424a2a06fa2efe6efc307d4c83ef645bfb\" returns successfully" May 17 00:30:45.621515 containerd[1808]: time="2025-05-17T00:30:45.621476849Z" level=info msg="StopPodSandbox for \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\"" May 17 00:30:45.656861 containerd[1808]: 2025-05-17 00:30:45.639 [WARNING][7006] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0", GenerateName:"calico-kube-controllers-c7b857f57-", Namespace:"calico-system", SelfLink:"", UID:"7e5ca975-ef7f-414a-942f-bcd57dc8d07a", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c7b857f57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5", Pod:"calico-kube-controllers-c7b857f57-2t2zm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.48.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47e7886e210", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.656861 containerd[1808]: 2025-05-17 00:30:45.639 [INFO][7006] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:45.656861 containerd[1808]: 2025-05-17 00:30:45.639 [INFO][7006] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" iface="eth0" netns="" May 17 00:30:45.656861 containerd[1808]: 2025-05-17 00:30:45.639 [INFO][7006] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:45.656861 containerd[1808]: 2025-05-17 00:30:45.639 [INFO][7006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:45.656861 containerd[1808]: 2025-05-17 00:30:45.649 [INFO][7022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" HandleID="k8s-pod-network.930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:45.656861 containerd[1808]: 2025-05-17 00:30:45.649 [INFO][7022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.656861 containerd[1808]: 2025-05-17 00:30:45.649 [INFO][7022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:45.656861 containerd[1808]: 2025-05-17 00:30:45.654 [WARNING][7022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" HandleID="k8s-pod-network.930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:45.656861 containerd[1808]: 2025-05-17 00:30:45.654 [INFO][7022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" HandleID="k8s-pod-network.930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:45.656861 containerd[1808]: 2025-05-17 00:30:45.655 [INFO][7022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.656861 containerd[1808]: 2025-05-17 00:30:45.656 [INFO][7006] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:45.656861 containerd[1808]: time="2025-05-17T00:30:45.656844302Z" level=info msg="TearDown network for sandbox \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\" successfully" May 17 00:30:45.657200 containerd[1808]: time="2025-05-17T00:30:45.656864494Z" level=info msg="StopPodSandbox for \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\" returns successfully" May 17 00:30:45.657200 containerd[1808]: time="2025-05-17T00:30:45.657179696Z" level=info msg="RemovePodSandbox for \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\"" May 17 00:30:45.657240 containerd[1808]: time="2025-05-17T00:30:45.657201322Z" level=info msg="Forcibly stopping sandbox \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\"" May 17 00:30:45.696623 containerd[1808]: time="2025-05-17T00:30:45.696568199Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:30:45.696882 containerd[1808]: 2025-05-17 00:30:45.677 [WARNING][7047] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0", GenerateName:"calico-kube-controllers-c7b857f57-", Namespace:"calico-system", SelfLink:"", UID:"7e5ca975-ef7f-414a-942f-bcd57dc8d07a", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c7b857f57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"87a5045fe091cdb0c1b9f1c4342551dd50cacb1924ce20b10d20d7d476bea3b5", Pod:"calico-kube-controllers-c7b857f57-2t2zm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.48.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47e7886e210", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.696882 containerd[1808]: 2025-05-17 00:30:45.677 [INFO][7047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:45.696882 containerd[1808]: 2025-05-17 00:30:45.677 [INFO][7047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" iface="eth0" netns="" May 17 00:30:45.696882 containerd[1808]: 2025-05-17 00:30:45.677 [INFO][7047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:45.696882 containerd[1808]: 2025-05-17 00:30:45.677 [INFO][7047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:45.696882 containerd[1808]: 2025-05-17 00:30:45.689 [INFO][7061] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" HandleID="k8s-pod-network.930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:45.696882 containerd[1808]: 2025-05-17 00:30:45.689 [INFO][7061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.696882 containerd[1808]: 2025-05-17 00:30:45.689 [INFO][7061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:45.696882 containerd[1808]: 2025-05-17 00:30:45.693 [WARNING][7061] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" HandleID="k8s-pod-network.930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:45.696882 containerd[1808]: 2025-05-17 00:30:45.693 [INFO][7061] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" HandleID="k8s-pod-network.930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--kube--controllers--c7b857f57--2t2zm-eth0" May 17 00:30:45.696882 containerd[1808]: 2025-05-17 00:30:45.695 [INFO][7061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.696882 containerd[1808]: 2025-05-17 00:30:45.696 [INFO][7047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7" May 17 00:30:45.696882 containerd[1808]: time="2025-05-17T00:30:45.696845552Z" level=info msg="TearDown network for sandbox \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\" successfully" May 17 00:30:45.697143 containerd[1808]: time="2025-05-17T00:30:45.696927156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:30:45.697143 containerd[1808]: time="2025-05-17T00:30:45.696973723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:30:45.697179 kubelet[3069]: E0517 00:30:45.697053 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:30:45.697179 kubelet[3069]: E0517 00:30:45.697088 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:30:45.697398 kubelet[3069]: E0517 00:30:45.697164 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6kf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-st7xr_calico-system(79106d86-9187-44ab-a1d7-9ef14c711cf6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:30:45.698182 containerd[1808]: time="2025-05-17T00:30:45.698131649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:30:45.698182 containerd[1808]: time="2025-05-17T00:30:45.698159902Z" level=info msg="RemovePodSandbox \"930de5790a0a3903437cc79f2053ac8655589122087f7fcf8430cc6ec7d809b7\" returns successfully" May 17 00:30:45.698238 kubelet[3069]: E0517 00:30:45.698223 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:30:45.698356 containerd[1808]: time="2025-05-17T00:30:45.698346052Z" level=info msg="StopPodSandbox for \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\"" May 17 00:30:45.731850 containerd[1808]: 2025-05-17 00:30:45.714 [WARNING][7084] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-whisker--78ff969995--tfnzx-eth0" May 17 00:30:45.731850 containerd[1808]: 2025-05-17 00:30:45.714 [INFO][7084] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:45.731850 containerd[1808]: 2025-05-17 00:30:45.714 [INFO][7084] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" iface="eth0" netns="" May 17 00:30:45.731850 containerd[1808]: 2025-05-17 00:30:45.714 [INFO][7084] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:45.731850 containerd[1808]: 2025-05-17 00:30:45.714 [INFO][7084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:45.731850 containerd[1808]: 2025-05-17 00:30:45.725 [INFO][7103] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" HandleID="k8s-pod-network.17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" Workload="ci--4081.3.3--n--65a4af4639-k8s-whisker--78ff969995--tfnzx-eth0" May 17 00:30:45.731850 containerd[1808]: 2025-05-17 00:30:45.725 [INFO][7103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.731850 containerd[1808]: 2025-05-17 00:30:45.725 [INFO][7103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:45.731850 containerd[1808]: 2025-05-17 00:30:45.729 [WARNING][7103] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" HandleID="k8s-pod-network.17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" Workload="ci--4081.3.3--n--65a4af4639-k8s-whisker--78ff969995--tfnzx-eth0" May 17 00:30:45.731850 containerd[1808]: 2025-05-17 00:30:45.729 [INFO][7103] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" HandleID="k8s-pod-network.17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" Workload="ci--4081.3.3--n--65a4af4639-k8s-whisker--78ff969995--tfnzx-eth0" May 17 00:30:45.731850 containerd[1808]: 2025-05-17 00:30:45.730 [INFO][7103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.731850 containerd[1808]: 2025-05-17 00:30:45.731 [INFO][7084] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:45.732111 containerd[1808]: time="2025-05-17T00:30:45.731877409Z" level=info msg="TearDown network for sandbox \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\" successfully" May 17 00:30:45.732111 containerd[1808]: time="2025-05-17T00:30:45.731892543Z" level=info msg="StopPodSandbox for \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\" returns successfully" May 17 00:30:45.732212 containerd[1808]: time="2025-05-17T00:30:45.732200179Z" level=info msg="RemovePodSandbox for \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\"" May 17 00:30:45.732250 containerd[1808]: time="2025-05-17T00:30:45.732217204Z" level=info msg="Forcibly stopping sandbox \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\"" May 17 00:30:45.766553 containerd[1808]: 2025-05-17 00:30:45.748 [WARNING][7125] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" WorkloadEndpoint="ci--4081.3.3--n--65a4af4639-k8s-whisker--78ff969995--tfnzx-eth0" May 17 00:30:45.766553 containerd[1808]: 2025-05-17 00:30:45.749 [INFO][7125] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:45.766553 containerd[1808]: 2025-05-17 00:30:45.749 [INFO][7125] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" iface="eth0" netns="" May 17 00:30:45.766553 containerd[1808]: 2025-05-17 00:30:45.749 [INFO][7125] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:45.766553 containerd[1808]: 2025-05-17 00:30:45.749 [INFO][7125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:45.766553 containerd[1808]: 2025-05-17 00:30:45.759 [INFO][7139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" HandleID="k8s-pod-network.17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" Workload="ci--4081.3.3--n--65a4af4639-k8s-whisker--78ff969995--tfnzx-eth0" May 17 00:30:45.766553 containerd[1808]: 2025-05-17 00:30:45.759 [INFO][7139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:30:45.766553 containerd[1808]: 2025-05-17 00:30:45.759 [INFO][7139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:45.766553 containerd[1808]: 2025-05-17 00:30:45.763 [WARNING][7139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" HandleID="k8s-pod-network.17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" Workload="ci--4081.3.3--n--65a4af4639-k8s-whisker--78ff969995--tfnzx-eth0" May 17 00:30:45.766553 containerd[1808]: 2025-05-17 00:30:45.763 [INFO][7139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" HandleID="k8s-pod-network.17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" Workload="ci--4081.3.3--n--65a4af4639-k8s-whisker--78ff969995--tfnzx-eth0" May 17 00:30:45.766553 containerd[1808]: 2025-05-17 00:30:45.765 [INFO][7139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.766553 containerd[1808]: 2025-05-17 00:30:45.765 [INFO][7125] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681" May 17 00:30:45.766864 containerd[1808]: time="2025-05-17T00:30:45.766578218Z" level=info msg="TearDown network for sandbox \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\" successfully" May 17 00:30:45.767980 containerd[1808]: time="2025-05-17T00:30:45.767967513Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:30:45.768009 containerd[1808]: time="2025-05-17T00:30:45.767994053Z" level=info msg="RemovePodSandbox \"17f07e6739870b849e36451c059a778ca0d348171e4aa15f07a83bc197cfc681\" returns successfully" May 17 00:30:45.768272 containerd[1808]: time="2025-05-17T00:30:45.768262830Z" level=info msg="StopPodSandbox for \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\"" May 17 00:30:45.801167 containerd[1808]: 2025-05-17 00:30:45.785 [WARNING][7167] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"79106d86-9187-44ab-a1d7-9ef14c711cf6", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04", Pod:"goldmane-8f77d7b6c-st7xr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.48.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib964d06d6eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.801167 containerd[1808]: 2025-05-17 00:30:45.785 [INFO][7167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:45.801167 containerd[1808]: 2025-05-17 00:30:45.785 [INFO][7167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" iface="eth0" netns="" May 17 00:30:45.801167 containerd[1808]: 2025-05-17 00:30:45.785 [INFO][7167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:45.801167 containerd[1808]: 2025-05-17 00:30:45.785 [INFO][7167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:45.801167 containerd[1808]: 2025-05-17 00:30:45.795 [INFO][7182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" HandleID="k8s-pod-network.216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" Workload="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:45.801167 containerd[1808]: 2025-05-17 00:30:45.795 [INFO][7182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.801167 containerd[1808]: 2025-05-17 00:30:45.795 [INFO][7182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:45.801167 containerd[1808]: 2025-05-17 00:30:45.798 [WARNING][7182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" HandleID="k8s-pod-network.216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" Workload="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:45.801167 containerd[1808]: 2025-05-17 00:30:45.798 [INFO][7182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" HandleID="k8s-pod-network.216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" Workload="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:45.801167 containerd[1808]: 2025-05-17 00:30:45.799 [INFO][7182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.801167 containerd[1808]: 2025-05-17 00:30:45.800 [INFO][7167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:45.801485 containerd[1808]: time="2025-05-17T00:30:45.801204284Z" level=info msg="TearDown network for sandbox \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\" successfully" May 17 00:30:45.801485 containerd[1808]: time="2025-05-17T00:30:45.801219912Z" level=info msg="StopPodSandbox for \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\" returns successfully" May 17 00:30:45.801485 containerd[1808]: time="2025-05-17T00:30:45.801464505Z" level=info msg="RemovePodSandbox for \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\"" May 17 00:30:45.801485 containerd[1808]: time="2025-05-17T00:30:45.801478998Z" level=info msg="Forcibly stopping sandbox \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\"" May 17 00:30:45.835279 containerd[1808]: 2025-05-17 00:30:45.817 [WARNING][7204] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"79106d86-9187-44ab-a1d7-9ef14c711cf6", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"b2ce52732e1d627f09b4c9dec0df316feb725403e7dcac24b5df2dc2f69e0b04", Pod:"goldmane-8f77d7b6c-st7xr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.48.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib964d06d6eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.835279 containerd[1808]: 2025-05-17 00:30:45.817 [INFO][7204] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:45.835279 containerd[1808]: 2025-05-17 00:30:45.817 [INFO][7204] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" iface="eth0" netns="" May 17 00:30:45.835279 containerd[1808]: 2025-05-17 00:30:45.817 [INFO][7204] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:45.835279 containerd[1808]: 2025-05-17 00:30:45.817 [INFO][7204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:45.835279 containerd[1808]: 2025-05-17 00:30:45.827 [INFO][7218] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" HandleID="k8s-pod-network.216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" Workload="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:45.835279 containerd[1808]: 2025-05-17 00:30:45.827 [INFO][7218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.835279 containerd[1808]: 2025-05-17 00:30:45.827 [INFO][7218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:45.835279 containerd[1808]: 2025-05-17 00:30:45.832 [WARNING][7218] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" HandleID="k8s-pod-network.216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" Workload="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:45.835279 containerd[1808]: 2025-05-17 00:30:45.832 [INFO][7218] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" HandleID="k8s-pod-network.216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" Workload="ci--4081.3.3--n--65a4af4639-k8s-goldmane--8f77d7b6c--st7xr-eth0" May 17 00:30:45.835279 containerd[1808]: 2025-05-17 00:30:45.833 [INFO][7218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.835279 containerd[1808]: 2025-05-17 00:30:45.834 [INFO][7204] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce" May 17 00:30:45.835588 containerd[1808]: time="2025-05-17T00:30:45.835279638Z" level=info msg="TearDown network for sandbox \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\" successfully" May 17 00:30:45.836644 containerd[1808]: time="2025-05-17T00:30:45.836602865Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:30:45.836644 containerd[1808]: time="2025-05-17T00:30:45.836630283Z" level=info msg="RemovePodSandbox \"216d45b2c6c3367a7910df25f77b91bb33c13a024b594fe417499dda1df4a7ce\" returns successfully" May 17 00:30:45.836949 containerd[1808]: time="2025-05-17T00:30:45.836894828Z" level=info msg="StopPodSandbox for \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\"" May 17 00:30:45.871069 containerd[1808]: 2025-05-17 00:30:45.854 [WARNING][7244] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"441d803b-df59-4a3e-b55c-43834c087e2b", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44", Pod:"coredns-7c65d6cfc9-2nkhg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b1e6a2f145", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.871069 containerd[1808]: 2025-05-17 00:30:45.854 [INFO][7244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:45.871069 containerd[1808]: 2025-05-17 00:30:45.854 [INFO][7244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" iface="eth0" netns="" May 17 00:30:45.871069 containerd[1808]: 2025-05-17 00:30:45.854 [INFO][7244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:45.871069 containerd[1808]: 2025-05-17 00:30:45.854 [INFO][7244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:45.871069 containerd[1808]: 2025-05-17 00:30:45.864 [INFO][7259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" HandleID="k8s-pod-network.561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:45.871069 containerd[1808]: 2025-05-17 00:30:45.864 [INFO][7259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.871069 containerd[1808]: 2025-05-17 00:30:45.864 [INFO][7259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:30:45.871069 containerd[1808]: 2025-05-17 00:30:45.868 [WARNING][7259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" HandleID="k8s-pod-network.561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:45.871069 containerd[1808]: 2025-05-17 00:30:45.868 [INFO][7259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" HandleID="k8s-pod-network.561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:45.871069 containerd[1808]: 2025-05-17 00:30:45.869 [INFO][7259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.871069 containerd[1808]: 2025-05-17 00:30:45.870 [INFO][7244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:45.871069 containerd[1808]: time="2025-05-17T00:30:45.871067669Z" level=info msg="TearDown network for sandbox \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\" successfully" May 17 00:30:45.871409 containerd[1808]: time="2025-05-17T00:30:45.871083409Z" level=info msg="StopPodSandbox for \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\" returns successfully" May 17 00:30:45.871409 containerd[1808]: time="2025-05-17T00:30:45.871363916Z" level=info msg="RemovePodSandbox for \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\"" May 17 00:30:45.871409 containerd[1808]: time="2025-05-17T00:30:45.871386283Z" level=info msg="Forcibly stopping sandbox \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\"" May 17 00:30:45.911968 containerd[1808]: 2025-05-17 00:30:45.889 [WARNING][7281] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"441d803b-df59-4a3e-b55c-43834c087e2b", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"3917b3938fc7fd3f804c58f4104767769e489e73211b1f87eb775887e8ebbe44", Pod:"coredns-7c65d6cfc9-2nkhg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b1e6a2f145", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.911968 containerd[1808]: 2025-05-17 00:30:45.889 [INFO][7281] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:45.911968 containerd[1808]: 2025-05-17 00:30:45.889 [INFO][7281] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" iface="eth0" netns="" May 17 00:30:45.911968 containerd[1808]: 2025-05-17 00:30:45.889 [INFO][7281] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:45.911968 containerd[1808]: 2025-05-17 00:30:45.889 [INFO][7281] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:45.911968 containerd[1808]: 2025-05-17 00:30:45.902 [INFO][7299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" HandleID="k8s-pod-network.561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:45.911968 containerd[1808]: 2025-05-17 00:30:45.902 [INFO][7299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.911968 containerd[1808]: 2025-05-17 00:30:45.902 [INFO][7299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:30:45.911968 containerd[1808]: 2025-05-17 00:30:45.908 [WARNING][7299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" HandleID="k8s-pod-network.561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:45.911968 containerd[1808]: 2025-05-17 00:30:45.908 [INFO][7299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" HandleID="k8s-pod-network.561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" Workload="ci--4081.3.3--n--65a4af4639-k8s-coredns--7c65d6cfc9--2nkhg-eth0" May 17 00:30:45.911968 containerd[1808]: 2025-05-17 00:30:45.910 [INFO][7299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.911968 containerd[1808]: 2025-05-17 00:30:45.911 [INFO][7281] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2" May 17 00:30:45.911968 containerd[1808]: time="2025-05-17T00:30:45.911954103Z" level=info msg="TearDown network for sandbox \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\" successfully" May 17 00:30:45.913750 containerd[1808]: time="2025-05-17T00:30:45.913735800Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:30:45.913789 containerd[1808]: time="2025-05-17T00:30:45.913770294Z" level=info msg="RemovePodSandbox \"561700d2acf8dc379d799a9243b5b889153b4fde956fb74b5bde998287dde6f2\" returns successfully" May 17 00:30:45.914067 containerd[1808]: time="2025-05-17T00:30:45.914049587Z" level=info msg="StopPodSandbox for \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\"" May 17 00:30:45.948655 containerd[1808]: 2025-05-17 00:30:45.931 [WARNING][7322] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0", GenerateName:"calico-apiserver-688cf69547-", Namespace:"calico-apiserver", SelfLink:"", UID:"93f363c3-f873-4d07-a216-fd2fe6414a28", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"688cf69547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf", Pod:"calico-apiserver-688cf69547-wc762", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibeca2abf310", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.948655 containerd[1808]: 2025-05-17 00:30:45.931 [INFO][7322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:45.948655 containerd[1808]: 2025-05-17 00:30:45.931 [INFO][7322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" iface="eth0" netns="" May 17 00:30:45.948655 containerd[1808]: 2025-05-17 00:30:45.931 [INFO][7322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:45.948655 containerd[1808]: 2025-05-17 00:30:45.931 [INFO][7322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:45.948655 containerd[1808]: 2025-05-17 00:30:45.941 [INFO][7342] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" HandleID="k8s-pod-network.3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:45.948655 containerd[1808]: 2025-05-17 00:30:45.941 [INFO][7342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.948655 containerd[1808]: 2025-05-17 00:30:45.941 [INFO][7342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:45.948655 containerd[1808]: 2025-05-17 00:30:45.946 [WARNING][7342] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" HandleID="k8s-pod-network.3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:45.948655 containerd[1808]: 2025-05-17 00:30:45.946 [INFO][7342] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" HandleID="k8s-pod-network.3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:45.948655 containerd[1808]: 2025-05-17 00:30:45.947 [INFO][7342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.948655 containerd[1808]: 2025-05-17 00:30:45.947 [INFO][7322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:45.948967 containerd[1808]: time="2025-05-17T00:30:45.948654803Z" level=info msg="TearDown network for sandbox \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\" successfully" May 17 00:30:45.948967 containerd[1808]: time="2025-05-17T00:30:45.948671861Z" level=info msg="StopPodSandbox for \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\" returns successfully" May 17 00:30:45.948967 containerd[1808]: time="2025-05-17T00:30:45.948949300Z" level=info msg="RemovePodSandbox for \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\"" May 17 00:30:45.948967 containerd[1808]: time="2025-05-17T00:30:45.948965516Z" level=info msg="Forcibly stopping sandbox \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\"" May 17 00:30:45.985107 containerd[1808]: 2025-05-17 00:30:45.967 [WARNING][7365] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0", GenerateName:"calico-apiserver-688cf69547-", Namespace:"calico-apiserver", SelfLink:"", UID:"93f363c3-f873-4d07-a216-fd2fe6414a28", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"688cf69547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-65a4af4639", ContainerID:"04d70d2a6208abbfb3ae167d9ab07650e5b989fa6e5b1c3b4cb3aa76303b61bf", Pod:"calico-apiserver-688cf69547-wc762", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibeca2abf310", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:45.985107 containerd[1808]: 2025-05-17 00:30:45.967 [INFO][7365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:45.985107 containerd[1808]: 2025-05-17 00:30:45.967 [INFO][7365] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" iface="eth0" netns="" May 17 00:30:45.985107 containerd[1808]: 2025-05-17 00:30:45.967 [INFO][7365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:45.985107 containerd[1808]: 2025-05-17 00:30:45.967 [INFO][7365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:45.985107 containerd[1808]: 2025-05-17 00:30:45.977 [INFO][7378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" HandleID="k8s-pod-network.3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:45.985107 containerd[1808]: 2025-05-17 00:30:45.977 [INFO][7378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:45.985107 containerd[1808]: 2025-05-17 00:30:45.977 [INFO][7378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:45.985107 containerd[1808]: 2025-05-17 00:30:45.982 [WARNING][7378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" HandleID="k8s-pod-network.3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:45.985107 containerd[1808]: 2025-05-17 00:30:45.982 [INFO][7378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" HandleID="k8s-pod-network.3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" Workload="ci--4081.3.3--n--65a4af4639-k8s-calico--apiserver--688cf69547--wc762-eth0" May 17 00:30:45.985107 containerd[1808]: 2025-05-17 00:30:45.983 [INFO][7378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:45.985107 containerd[1808]: 2025-05-17 00:30:45.984 [INFO][7365] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2" May 17 00:30:45.985509 containerd[1808]: time="2025-05-17T00:30:45.985135640Z" level=info msg="TearDown network for sandbox \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\" successfully" May 17 00:30:45.986684 containerd[1808]: time="2025-05-17T00:30:45.986642812Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:30:45.986684 containerd[1808]: time="2025-05-17T00:30:45.986671757Z" level=info msg="RemovePodSandbox \"3d011f4b5344b45cb6a749220a2ba6708ba233c9a851bcad654f2cb317f7c2b2\" returns successfully" May 17 00:30:47.395858 kubelet[3069]: E0517 00:30:47.395776 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:30:59.192657 kubelet[3069]: I0517 00:30:59.192570 3069 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:31:00.395339 kubelet[3069]: E0517 00:31:00.395203 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:31:02.395582 containerd[1808]: time="2025-05-17T00:31:02.395428965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:31:02.706489 containerd[1808]: time="2025-05-17T00:31:02.706213265Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:31:02.707031 containerd[1808]: time="2025-05-17T00:31:02.707011217Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": 
failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:31:02.707103 containerd[1808]: time="2025-05-17T00:31:02.707084834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:31:02.707212 kubelet[3069]: E0517 00:31:02.707158 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:31:02.707212 kubelet[3069]: E0517 00:31:02.707189 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:31:02.707444 kubelet[3069]: E0517 00:31:02.707249 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e162bdd4080846acbc669a202524e139,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hzcj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d7ff6d95-qgk6n_calico-system(de7e5665-5768-4a36-925b-d749b053cd37): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:31:02.708965 containerd[1808]: 
time="2025-05-17T00:31:02.708920705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:31:03.018821 containerd[1808]: time="2025-05-17T00:31:03.018704753Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:31:03.019450 containerd[1808]: time="2025-05-17T00:31:03.019430428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:31:03.019526 containerd[1808]: time="2025-05-17T00:31:03.019495295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:31:03.019648 kubelet[3069]: E0517 00:31:03.019616 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:31:03.019714 kubelet[3069]: E0517 00:31:03.019659 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:31:03.019784 kubelet[3069]: E0517 00:31:03.019753 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzcj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d7ff6d95-qgk6n_calico-system(de7e5665-5768-4a36-925b-d749b053cd37): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:31:03.020946 kubelet[3069]: E0517 00:31:03.020894 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:31:14.394098 containerd[1808]: time="2025-05-17T00:31:14.394056499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 
17 00:31:14.687495 containerd[1808]: time="2025-05-17T00:31:14.687424588Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:31:14.687768 containerd[1808]: time="2025-05-17T00:31:14.687755839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:31:14.687848 containerd[1808]: time="2025-05-17T00:31:14.687832738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:31:14.687979 kubelet[3069]: E0517 00:31:14.687940 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:31:14.688215 kubelet[3069]: E0517 00:31:14.687989 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:31:14.688215 kubelet[3069]: E0517 00:31:14.688082 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6kf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-st7xr_calico-system(79106d86-9187-44ab-a1d7-9ef14c711cf6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:31:14.689241 kubelet[3069]: E0517 00:31:14.689226 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:31:15.397894 kubelet[3069]: E0517 00:31:15.397791 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:31:26.394708 kubelet[3069]: E0517 00:31:26.394575 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:31:26.396268 kubelet[3069]: E0517 00:31:26.396145 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:31:36.291230 systemd[1]: Started sshd@9-147.75.203.231:22-218.92.0.209:7112.service - OpenSSH per-connection server daemon (218.92.0.209:7112). May 17 00:31:36.453174 sshd[7515]: Unable to negotiate with 218.92.0.209 port 7112: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] May 17 00:31:36.455313 systemd[1]: sshd@9-147.75.203.231:22-218.92.0.209:7112.service: Deactivated successfully. 
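From 00:30:47 onward the node settles into the ErrImagePull / ImagePullBackOff cycle visible above and below: every attempt to pull the whisker, whisker-backend and goldmane images dies at the authorization step (note "active requests=0, bytes read=86", so no layer data ever moves) because ghcr.io answers the anonymous token request with 403 Forbidden, typically what that registry returns when a repository is private or does not exist. The interleaved sshd entry at 00:31:36 is unrelated background noise: the client at 218.92.0.209 offered only legacy SHA-1 Diffie-Hellman key-exchange methods, which current OpenSSH defaults refuse, so the connection was dropped pre-auth. The pull failure itself is reproducible off the node with a plain GET against the token endpoint; a sketch, where the URL is copied verbatim from the log and everything else is illustrative:

    // tokenprobe.go: ask ghcr.io for the same anonymous pull token
    // containerd's resolver requested and report the HTTP status.
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        const tokenURL = "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io"
        resp, err := http.Get(tokenURL)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
        fmt.Println("status:", resp.Status) // the capture saw "403 Forbidden"
        fmt.Printf("body:   %s\n", body)
    }

A 200 with a token here would shift suspicion back to the node (clock skew, proxy, egress policy); the repeated 403 in this capture points at the image references themselves.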
May 17 00:31:39.396148 kubelet[3069]: E0517 00:31:39.396029 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:31:41.394765 kubelet[3069]: E0517 00:31:41.394680 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:31:53.396082 containerd[1808]: time="2025-05-17T00:31:53.395990904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:31:53.702889 containerd[1808]: time="2025-05-17T00:31:53.702617183Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:31:53.703435 containerd[1808]: time="2025-05-17T00:31:53.703409993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:31:53.703521 containerd[1808]: time="2025-05-17T00:31:53.703494575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:31:53.703636 kubelet[3069]: E0517 00:31:53.703612 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:31:53.703830 kubelet[3069]: E0517 00:31:53.703646 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:31:53.703830 kubelet[3069]: E0517 00:31:53.703710 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e162bdd4080846acbc669a202524e139,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hzcj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d7ff6d95-qgk6n_calico-system(de7e5665-5768-4a36-925b-d749b053cd37): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:31:53.705618 containerd[1808]: time="2025-05-17T00:31:53.705551226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:31:54.021300 containerd[1808]: time="2025-05-17T00:31:54.021262112Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:31:54.022004 containerd[1808]: time="2025-05-17T00:31:54.021909497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:31:54.022101 containerd[1808]: time="2025-05-17T00:31:54.022015493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:31:54.022170 kubelet[3069]: E0517 00:31:54.022115 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:31:54.022170 kubelet[3069]: E0517 00:31:54.022159 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:31:54.022267 kubelet[3069]: E0517 00:31:54.022244 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzcj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d7ff6d95-qgk6n_calico-system(de7e5665-5768-4a36-925b-d749b053cd37): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:31:54.023487 kubelet[3069]: E0517 00:31:54.023433 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:31:54.395525 kubelet[3069]: E0517 00:31:54.395301 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:32:05.393681 containerd[1808]: time="2025-05-17T00:32:05.393656981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:32:05.727244 containerd[1808]: time="2025-05-17T00:32:05.727138301Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:32:05.728210 containerd[1808]: time="2025-05-17T00:32:05.728185669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:32:05.728306 containerd[1808]: time="2025-05-17T00:32:05.728269379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:32:05.728534 kubelet[3069]: E0517 00:32:05.728480 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:32:05.728534 kubelet[3069]: E0517 00:32:05.728513 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:32:05.728873 kubelet[3069]: E0517 00:32:05.728624 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6kf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-st7xr_calico-system(79106d86-9187-44ab-a1d7-9ef14c711cf6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:32:05.729905 kubelet[3069]: E0517 00:32:05.729834 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:32:08.395684 kubelet[3069]: E0517 00:32:08.395581 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:32:16.395195 kubelet[3069]: E0517 00:32:16.394957 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:32:23.395672 kubelet[3069]: E0517 00:32:23.395519 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:32:30.395481 kubelet[3069]: E0517 00:32:30.395340 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:32:36.396222 kubelet[3069]: E0517 00:32:36.396095 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:32:45.398016 kubelet[3069]: E0517 00:32:45.397924 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:32:47.396028 kubelet[3069]: E0517 00:32:47.395913 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:33:00.395237 kubelet[3069]: E0517 00:33:00.395143 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:33:00.396294 kubelet[3069]: E0517 00:33:00.396172 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:33:14.395596 containerd[1808]: time="2025-05-17T00:33:14.395468415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:33:14.706262 containerd[1808]: time="2025-05-17T00:33:14.705985511Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:33:14.707094 containerd[1808]: time="2025-05-17T00:33:14.707013843Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:33:14.707139 containerd[1808]: time="2025-05-17T00:33:14.707087898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:33:14.707250 kubelet[3069]: E0517 00:33:14.707203 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:33:14.707250 kubelet[3069]: E0517 00:33:14.707234 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:33:14.707521 kubelet[3069]: E0517 00:33:14.707296 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e162bdd4080846acbc669a202524e139,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hzcj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d7ff6d95-qgk6n_calico-system(de7e5665-5768-4a36-925b-d749b053cd37): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:33:14.709023 containerd[1808]: time="2025-05-17T00:33:14.708972110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:33:15.030890 containerd[1808]: time="2025-05-17T00:33:15.030778747Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:33:15.031540 containerd[1808]: time="2025-05-17T00:33:15.031495478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:33:15.031540 containerd[1808]: time="2025-05-17T00:33:15.031532401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:33:15.031643 kubelet[3069]: E0517 00:33:15.031626 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:33:15.031696 kubelet[3069]: E0517 00:33:15.031650 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:33:15.031725 kubelet[3069]: E0517 00:33:15.031708 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzcj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d7ff6d95-qgk6n_calico-system(de7e5665-5768-4a36-925b-d749b053cd37): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:33:15.032860 kubelet[3069]: E0517 00:33:15.032841 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:33:15.396179 kubelet[3069]: E0517 00:33:15.395941 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:33:26.395549 containerd[1808]: time="2025-05-17T00:33:26.395406013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:33:26.396275 kubelet[3069]: E0517 00:33:26.395957 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:33:26.700519 containerd[1808]: time="2025-05-17T00:33:26.700263978Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:33:26.701323 containerd[1808]: time="2025-05-17T00:33:26.701304239Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:33:26.701441 containerd[1808]: time="2025-05-17T00:33:26.701400406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:33:26.701580 kubelet[3069]: E0517 00:33:26.701526 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:33:26.701580 kubelet[3069]: E0517 00:33:26.701558 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:33:26.701688 kubelet[3069]: E0517 00:33:26.701633 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6kf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-st7xr_calico-system(79106d86-9187-44ab-a1d7-9ef14c711cf6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:33:26.702804 kubelet[3069]: E0517 
00:33:26.702790 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:33:38.395074 kubelet[3069]: E0517 00:33:38.394977 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:33:41.396405 kubelet[3069]: E0517 00:33:41.396183 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:33:53.395489 kubelet[3069]: E0517 00:33:53.395391 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:33:54.395586 kubelet[3069]: E0517 00:33:54.395492 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:34:04.394272 kubelet[3069]: E0517 00:34:04.394172 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:34:06.396005 kubelet[3069]: E0517 00:34:06.395902 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:34:16.394714 kubelet[3069]: E0517 00:34:16.394625 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" 
podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:34:17.396479 kubelet[3069]: E0517 00:34:17.396342 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:34:27.394399 kubelet[3069]: E0517 00:34:27.394265 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:34:30.395404 kubelet[3069]: E0517 00:34:30.395255 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:34:42.393374 kubelet[3069]: E0517 00:34:42.393321 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:34:42.393815 kubelet[3069]: E0517 00:34:42.393779 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:34:54.396184 kubelet[3069]: E0517 00:34:54.396058 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:34:55.396412 kubelet[3069]: E0517 00:34:55.396261 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:35:05.393159 kubelet[3069]: E0517 00:35:05.393136 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:35:10.395152 kubelet[3069]: E0517 00:35:10.395057 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:35:19.396227 kubelet[3069]: E0517 00:35:19.396033 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:35:25.393890 kubelet[3069]: E0517 00:35:25.393823 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:35:30.678306 systemd[1]: Started sshd@10-147.75.203.231:22-218.92.0.212:42264.service - OpenSSH per-connection server daemon (218.92.0.212:42264). May 17 00:35:30.847881 sshd[8132]: Unable to negotiate with 218.92.0.212 port 42264: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] May 17 00:35:30.850050 systemd[1]: sshd@10-147.75.203.231:22-218.92.0.212:42264.service: Deactivated successfully. 
May 17 00:35:34.395694 kubelet[3069]: E0517 00:35:34.395585 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:35:36.394410 kubelet[3069]: E0517 00:35:36.394317 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:35:46.396559 kubelet[3069]: E0517 00:35:46.396421 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:35:48.395136 kubelet[3069]: E0517 00:35:48.394988 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:35:57.269611 systemd[1]: Started sshd@11-147.75.203.231:22-218.92.0.212:43222.service - OpenSSH per-connection server daemon (218.92.0.212:43222). May 17 00:35:57.420741 sshd[8201]: Unable to negotiate with 218.92.0.212 port 43222: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] May 17 00:35:57.422862 systemd[1]: sshd@11-147.75.203.231:22-218.92.0.212:43222.service: Deactivated successfully. 
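Every pull failure in this window bottoms out at the same place: the anonymous token GET against https://ghcr.io/token returns 403 Forbidden, so containerd never reaches the image manifest, and kubelet surfaces the error as ErrImagePull and then ImagePullBackOff. A minimal diagnostic sketch that reproduces that request from the node (the URL is copied verbatim from the log entries above):

```python
# Reproduce the anonymous token request containerd issues before pulling
# ghcr.io/flatcar/calico/goldmane:v3.30.0. A 403 here matches the logged
# failures and confirms the registry, not the node, is rejecting the pull.
import json
import urllib.error
import urllib.request

URL = ("https://ghcr.io/token"
       "?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull"
       "&service=ghcr.io")

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        token = json.load(resp).get("token", "")
        print(f"token issued ({len(token)} bytes); pulls should succeed")
except urllib.error.HTTPError as exc:
    # kubelet reports this as ErrImagePull, then ImagePullBackOff.
    print(f"token endpoint answered {exc.code} {exc.reason}")
```

The "trying next host" / "stop pulling image ... bytes read=86" pairs in the containerd entries are the resolver giving up on ghcr.io after the token fetch fails; no image bytes are ever transferred.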
May 17 00:35:59.395407 kubelet[3069]: E0517 00:35:59.395302 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6" May 17 00:35:59.396443 containerd[1808]: time="2025-05-17T00:35:59.396196949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:35:59.714454 containerd[1808]: time="2025-05-17T00:35:59.714147950Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:35:59.732299 containerd[1808]: time="2025-05-17T00:35:59.732134907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:35:59.732299 containerd[1808]: time="2025-05-17T00:35:59.732218097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:35:59.732867 kubelet[3069]: E0517 00:35:59.732729 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:35:59.732867 kubelet[3069]: E0517 00:35:59.732841 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:35:59.733330 kubelet[3069]: E0517 00:35:59.733086 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e162bdd4080846acbc669a202524e139,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hzcj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d7ff6d95-qgk6n_calico-system(de7e5665-5768-4a36-925b-d749b053cd37): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:35:59.736072 containerd[1808]: time="2025-05-17T00:35:59.735999398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:36:00.053194 containerd[1808]: time="2025-05-17T00:36:00.053058818Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:36:00.053962 containerd[1808]: time="2025-05-17T00:36:00.053862840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:36:00.054046 containerd[1808]: time="2025-05-17T00:36:00.053935010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:36:00.054200 kubelet[3069]: E0517 00:36:00.054121 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:36:00.054200 kubelet[3069]: E0517 00:36:00.054175 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:36:00.054312 kubelet[3069]: E0517 00:36:00.054235 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzcj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66d7ff6d95-qgk6n_calico-system(de7e5665-5768-4a36-925b-d749b053cd37): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:36:00.055495 kubelet[3069]: E0517 00:36:00.055460 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37" May 17 00:36:04.209181 systemd[1]: Started sshd@12-147.75.203.231:22-147.75.109.163:45440.service - OpenSSH per-connection server daemon (147.75.109.163:45440). May 17 00:36:04.268760 sshd[8228]: Accepted publickey for core from 147.75.109.163 port 45440 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:36:04.269662 sshd[8228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:36:04.272712 systemd-logind[1798]: New session 12 of user core. May 17 00:36:04.292664 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:36:04.420054 sshd[8228]: pam_unix(sshd:session): session closed for user core May 17 00:36:04.421642 systemd[1]: sshd@12-147.75.203.231:22-147.75.109.163:45440.service: Deactivated successfully. May 17 00:36:04.422570 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:36:04.423258 systemd-logind[1798]: Session 12 logged out. Waiting for processes to exit. May 17 00:36:04.423764 systemd-logind[1798]: Removed session 12. May 17 00:36:09.438204 systemd[1]: Started sshd@13-147.75.203.231:22-147.75.109.163:59124.service - OpenSSH per-connection server daemon (147.75.109.163:59124). May 17 00:36:09.491012 sshd[8263]: Accepted publickey for core from 147.75.109.163 port 59124 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:36:09.492687 sshd[8263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:36:09.497103 systemd-logind[1798]: New session 13 of user core. May 17 00:36:09.514636 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:36:09.616133 sshd[8263]: pam_unix(sshd:session): session closed for user core May 17 00:36:09.618417 systemd[1]: sshd@13-147.75.203.231:22-147.75.109.163:59124.service: Deactivated successfully. May 17 00:36:09.619764 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:36:09.620867 systemd-logind[1798]: Session 13 logged out. Waiting for processes to exit. May 17 00:36:09.621789 systemd-logind[1798]: Removed session 13. 
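Note how the actual pull attempts grow further apart while "Back-off pulling image" entries fill the gaps: goldmane is retried at 00:32:05, 00:33:26, and 00:36:12, and whisker at 00:33:14 and 00:35:59. A short sketch of the doubling schedule that produces this spacing; the 10 s base and 300 s cap are assumptions based on kubelet defaults, not values recorded in this log:

```python
# Sketch of the retry cadence behind the alternating ErrImagePull /
# ImagePullBackOff entries: kubelet doubles the per-image backoff after
# each failed pull, up to a cap (base/cap assumed, see lead-in above).
base_s, cap_s = 10, 300
delay, schedule = base_s, []
while len(schedule) < 8:
    schedule.append(delay)
    delay = min(delay * 2, cap_s)
print(schedule)  # [10, 20, 40, 80, 160, 300, 300, 300]
```

Once the cap is reached, the pod stays in ImagePullBackOff and only the periodic "Error syncing pod, skipping" entries recur, which matches the roughly steady cadence of the kubelet lines between the pull attempts above.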
May 17 00:36:11.395748 kubelet[3069]: E0517 00:36:11.395661 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37"
May 17 00:36:12.395578 containerd[1808]: time="2025-05-17T00:36:12.395447476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 00:36:12.756949 containerd[1808]: time="2025-05-17T00:36:12.756804592Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:36:12.757975 containerd[1808]: time="2025-05-17T00:36:12.757882246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:36:12.757975 containerd[1808]: time="2025-05-17T00:36:12.757955733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 17 00:36:12.758145 kubelet[3069]: E0517 00:36:12.758085 3069 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:36:12.758409 kubelet[3069]: E0517 00:36:12.758146 3069 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:36:12.758409 kubelet[3069]: E0517 00:36:12.758252 3069 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6kf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-st7xr_calico-system(79106d86-9187-44ab-a1d7-9ef14c711cf6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:36:12.759659 kubelet[3069]: E0517 00:36:12.759617 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6"
May 17 00:36:14.633490 systemd[1]: Started sshd@14-147.75.203.231:22-147.75.109.163:59128.service - OpenSSH per-connection server daemon (147.75.109.163:59128).
May 17 00:36:14.662749 sshd[8293]: Accepted publickey for core from 147.75.109.163 port 59128 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 00:36:14.663452 sshd[8293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:14.665943 systemd-logind[1798]: New session 14 of user core.
May 17 00:36:14.683582 systemd[1]: Started session-14.scope - Session 14 of User core.
May 17 00:36:14.788905 sshd[8293]: pam_unix(sshd:session): session closed for user core
May 17 00:36:14.811733 systemd[1]: sshd@14-147.75.203.231:22-147.75.109.163:59128.service: Deactivated successfully.
May 17 00:36:14.815700 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:36:14.819152 systemd-logind[1798]: Session 14 logged out. Waiting for processes to exit.
May 17 00:36:14.836154 systemd[1]: Started sshd@15-147.75.203.231:22-147.75.109.163:59142.service - OpenSSH per-connection server daemon (147.75.109.163:59142).
May 17 00:36:14.838797 systemd-logind[1798]: Removed session 14.
May 17 00:36:14.910017 sshd[8320]: Accepted publickey for core from 147.75.109.163 port 59142 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 00:36:14.910839 sshd[8320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:14.913993 systemd-logind[1798]: New session 15 of user core.
May 17 00:36:14.931697 systemd[1]: Started session-15.scope - Session 15 of User core.
May 17 00:36:15.078303 sshd[8320]: pam_unix(sshd:session): session closed for user core
May 17 00:36:15.089196 systemd[1]: sshd@15-147.75.203.231:22-147.75.109.163:59142.service: Deactivated successfully.
May 17 00:36:15.090077 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:36:15.090785 systemd-logind[1798]: Session 15 logged out. Waiting for processes to exit.
May 17 00:36:15.091459 systemd[1]: Started sshd@16-147.75.203.231:22-147.75.109.163:59146.service - OpenSSH per-connection server daemon (147.75.109.163:59146).
May 17 00:36:15.091965 systemd-logind[1798]: Removed session 15.
May 17 00:36:15.120294 sshd[8344]: Accepted publickey for core from 147.75.109.163 port 59146 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 00:36:15.120957 sshd[8344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:15.123455 systemd-logind[1798]: New session 16 of user core.
May 17 00:36:15.138556 systemd[1]: Started session-16.scope - Session 16 of User core.
May 17 00:36:15.226977 sshd[8344]: pam_unix(sshd:session): session closed for user core
May 17 00:36:15.229170 systemd[1]: sshd@16-147.75.203.231:22-147.75.109.163:59146.service: Deactivated successfully.
May 17 00:36:15.230230 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:36:15.230725 systemd-logind[1798]: Session 16 logged out. Waiting for processes to exit.
May 17 00:36:15.231320 systemd-logind[1798]: Removed session 16.
May 17 00:36:20.247439 systemd[1]: Started sshd@17-147.75.203.231:22-147.75.109.163:50044.service - OpenSSH per-connection server daemon (147.75.109.163:50044).
May 17 00:36:20.276650 sshd[8409]: Accepted publickey for core from 147.75.109.163 port 50044 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 00:36:20.277301 sshd[8409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:20.279781 systemd-logind[1798]: New session 17 of user core.
May 17 00:36:20.301801 systemd[1]: Started session-17.scope - Session 17 of User core.
May 17 00:36:20.395797 sshd[8409]: pam_unix(sshd:session): session closed for user core
May 17 00:36:20.408404 systemd[1]: sshd@17-147.75.203.231:22-147.75.109.163:50044.service: Deactivated successfully.
May 17 00:36:20.409303 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:36:20.410105 systemd-logind[1798]: Session 17 logged out. Waiting for processes to exit.
May 17 00:36:20.410919 systemd[1]: Started sshd@18-147.75.203.231:22-147.75.109.163:50058.service - OpenSSH per-connection server daemon (147.75.109.163:50058).
May 17 00:36:20.411617 systemd-logind[1798]: Removed session 17.
May 17 00:36:20.450713 sshd[8435]: Accepted publickey for core from 147.75.109.163 port 50058 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 00:36:20.451556 sshd[8435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:20.454607 systemd-logind[1798]: New session 18 of user core.
May 17 00:36:20.478590 systemd[1]: Started session-18.scope - Session 18 of User core.
May 17 00:36:20.708609 sshd[8435]: pam_unix(sshd:session): session closed for user core
May 17 00:36:20.733517 systemd[1]: sshd@18-147.75.203.231:22-147.75.109.163:50058.service: Deactivated successfully.
May 17 00:36:20.735852 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:36:20.737837 systemd-logind[1798]: Session 18 logged out. Waiting for processes to exit.
May 17 00:36:20.740259 systemd[1]: Started sshd@19-147.75.203.231:22-147.75.109.163:50068.service - OpenSSH per-connection server daemon (147.75.109.163:50068).
May 17 00:36:20.742778 systemd-logind[1798]: Removed session 18.
May 17 00:36:20.847957 sshd[8458]: Accepted publickey for core from 147.75.109.163 port 50068 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 00:36:20.849235 sshd[8458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:20.853510 systemd-logind[1798]: New session 19 of user core.
May 17 00:36:20.875816 systemd[1]: Started session-19.scope - Session 19 of User core.
May 17 00:36:22.387682 sshd[8458]: pam_unix(sshd:session): session closed for user core
May 17 00:36:22.405245 systemd[1]: sshd@19-147.75.203.231:22-147.75.109.163:50068.service: Deactivated successfully.
May 17 00:36:22.407498 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:36:22.409275 systemd-logind[1798]: Session 19 logged out. Waiting for processes to exit.
May 17 00:36:22.411032 systemd[1]: Started sshd@20-147.75.203.231:22-147.75.109.163:50084.service - OpenSSH per-connection server daemon (147.75.109.163:50084).
May 17 00:36:22.412453 systemd-logind[1798]: Removed session 19.
May 17 00:36:22.472680 sshd[8492]: Accepted publickey for core from 147.75.109.163 port 50084 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 00:36:22.473614 sshd[8492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:22.476690 systemd-logind[1798]: New session 20 of user core.
May 17 00:36:22.490627 systemd[1]: Started session-20.scope - Session 20 of User core.
May 17 00:36:22.667840 sshd[8492]: pam_unix(sshd:session): session closed for user core
May 17 00:36:22.690233 systemd[1]: sshd@20-147.75.203.231:22-147.75.109.163:50084.service: Deactivated successfully.
May 17 00:36:22.691755 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:36:22.693095 systemd-logind[1798]: Session 20 logged out. Waiting for processes to exit.
May 17 00:36:22.694466 systemd[1]: Started sshd@21-147.75.203.231:22-147.75.109.163:50092.service - OpenSSH per-connection server daemon (147.75.109.163:50092).
May 17 00:36:22.695500 systemd-logind[1798]: Removed session 20.
May 17 00:36:22.774793 sshd[8516]: Accepted publickey for core from 147.75.109.163 port 50092 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 00:36:22.776588 sshd[8516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:22.781767 systemd-logind[1798]: New session 21 of user core.
May 17 00:36:22.792649 systemd[1]: Started session-21.scope - Session 21 of User core.
May 17 00:36:22.919405 sshd[8516]: pam_unix(sshd:session): session closed for user core
May 17 00:36:22.921029 systemd[1]: sshd@21-147.75.203.231:22-147.75.109.163:50092.service: Deactivated successfully.
May 17 00:36:22.921956 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:36:22.922697 systemd-logind[1798]: Session 21 logged out. Waiting for processes to exit.
May 17 00:36:22.923279 systemd-logind[1798]: Removed session 21.
May 17 00:36:25.393537 kubelet[3069]: E0517 00:36:25.393513 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6"
May 17 00:36:25.393756 kubelet[3069]: E0517 00:36:25.393667 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-66d7ff6d95-qgk6n" podUID="de7e5665-5768-4a36-925b-d749b053cd37"
May 17 00:36:27.943524 systemd[1]: Started sshd@22-147.75.203.231:22-147.75.109.163:50104.service - OpenSSH per-connection server daemon (147.75.109.163:50104).
May 17 00:36:27.976425 sshd[8546]: Accepted publickey for core from 147.75.109.163 port 50104 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 00:36:27.977140 sshd[8546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:27.979749 systemd-logind[1798]: New session 22 of user core.
May 17 00:36:27.991589 systemd[1]: Started session-22.scope - Session 22 of User core.
May 17 00:36:28.078641 sshd[8546]: pam_unix(sshd:session): session closed for user core
May 17 00:36:28.080233 systemd[1]: sshd@22-147.75.203.231:22-147.75.109.163:50104.service: Deactivated successfully.
May 17 00:36:28.081157 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:36:28.081908 systemd-logind[1798]: Session 22 logged out. Waiting for processes to exit.
May 17 00:36:28.082550 systemd-logind[1798]: Removed session 22.
May 17 00:36:33.095167 systemd[1]: Started sshd@23-147.75.203.231:22-147.75.109.163:57844.service - OpenSSH per-connection server daemon (147.75.109.163:57844).
May 17 00:36:33.125369 sshd[8591]: Accepted publickey for core from 147.75.109.163 port 57844 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 00:36:33.126119 sshd[8591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:33.128804 systemd-logind[1798]: New session 23 of user core.
May 17 00:36:33.129496 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 00:36:33.212093 sshd[8591]: pam_unix(sshd:session): session closed for user core
May 17 00:36:33.213763 systemd[1]: sshd@23-147.75.203.231:22-147.75.109.163:57844.service: Deactivated successfully.
May 17 00:36:33.214718 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:36:33.215375 systemd-logind[1798]: Session 23 logged out. Waiting for processes to exit.
May 17 00:36:33.216014 systemd-logind[1798]: Removed session 23.
May 17 00:36:36.394582 kubelet[3069]: E0517 00:36:36.394480 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-st7xr" podUID="79106d86-9187-44ab-a1d7-9ef14c711cf6"
May 17 00:36:38.242081 systemd[1]: Started sshd@24-147.75.203.231:22-147.75.109.163:59982.service - OpenSSH per-connection server daemon (147.75.109.163:59982).
May 17 00:36:38.304612 sshd[8617]: Accepted publickey for core from 147.75.109.163 port 59982 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 00:36:38.305566 sshd[8617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:36:38.308440 systemd-logind[1798]: New session 24 of user core.
May 17 00:36:38.321640 systemd[1]: Started session-24.scope - Session 24 of User core.
May 17 00:36:38.450620 sshd[8617]: pam_unix(sshd:session): session closed for user core
May 17 00:36:38.452240 systemd[1]: sshd@24-147.75.203.231:22-147.75.109.163:59982.service: Deactivated successfully.
May 17 00:36:38.453143 systemd[1]: session-24.scope: Deactivated successfully.
May 17 00:36:38.453847 systemd-logind[1798]: Session 24 logged out. Waiting for processes to exit.
May 17 00:36:38.454340 systemd-logind[1798]: Removed session 24.