May 17 00:22:27.019148 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:22:27.019162 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:22:27.019169 kernel: BIOS-provided physical RAM map:
May 17 00:22:27.019174 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
May 17 00:22:27.019177 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
May 17 00:22:27.019181 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
May 17 00:22:27.019186 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
May 17 00:22:27.019190 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
May 17 00:22:27.019194 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b23fff] usable
May 17 00:22:27.019198 kernel: BIOS-e820: [mem 0x0000000081b24000-0x0000000081b24fff] ACPI NVS
May 17 00:22:27.019202 kernel: BIOS-e820: [mem 0x0000000081b25000-0x0000000081b25fff] reserved
May 17 00:22:27.019207 kernel: BIOS-e820: [mem 0x0000000081b26000-0x000000008afccfff] usable
May 17 00:22:27.019211 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
May 17 00:22:27.019216 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
May 17 00:22:27.019221 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
May 17 00:22:27.019225 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
May 17 00:22:27.019231 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
May 17 00:22:27.019235 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
May 17 00:22:27.019240 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 17 00:22:27.019244 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
May 17 00:22:27.019249 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
May 17 00:22:27.019253 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
May 17 00:22:27.019258 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
May 17 00:22:27.019262 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
May 17 00:22:27.019267 kernel: NX (Execute Disable) protection: active
May 17 00:22:27.019272 kernel: APIC: Static calls initialized
May 17 00:22:27.019276 kernel: SMBIOS 3.2.1 present.
May 17 00:22:27.019281 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024
May 17 00:22:27.019286 kernel: tsc: Detected 3400.000 MHz processor
May 17 00:22:27.019291 kernel: tsc: Detected 3399.906 MHz TSC
May 17 00:22:27.019296 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:22:27.019301 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:22:27.019305 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
May 17 00:22:27.019310 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
May 17 00:22:27.019315 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:22:27.019320 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
May 17 00:22:27.019324 kernel: Using GB pages for direct mapping
May 17 00:22:27.019330 kernel: ACPI: Early table checksum verification disabled
May 17 00:22:27.019335 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
May 17 00:22:27.019340 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
May 17 00:22:27.019347 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
May 17 00:22:27.019352 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
May 17 00:22:27.019357 kernel: ACPI: FACS 0x000000008C66CF80 000040
May 17 00:22:27.019362 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
May 17 00:22:27.019368 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
May 17 00:22:27.019373 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
May 17 00:22:27.019378 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
May 17 00:22:27.019383 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
May 17 00:22:27.019388 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
May 17 00:22:27.019392 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
May 17 00:22:27.019397 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
May 17 00:22:27.019403 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
May 17 00:22:27.019408 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
May 17 00:22:27.019413 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
May 17 00:22:27.019418 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
May 17 00:22:27.019423 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
May 17 00:22:27.019428 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
May 17 00:22:27.019433 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
May 17 00:22:27.019438 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
May 17 00:22:27.019443 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
May 17 00:22:27.019449 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
May 17 00:22:27.019454 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
May 17 00:22:27.019459 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
May 17 00:22:27.019464 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
May 17 00:22:27.019469 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
May 17 00:22:27.019474 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
May 17 00:22:27.019479 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
May 17 00:22:27.019484 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
May 17 00:22:27.019489 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
May 17 00:22:27.019495 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
May 17 00:22:27.019499 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
May 17 00:22:27.019514 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
May 17 00:22:27.019540 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
May 17 00:22:27.019545 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
May 17 00:22:27.019566 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
May 17 00:22:27.019571 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
May 17 00:22:27.019576 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
May 17 00:22:27.019582 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
May 17 00:22:27.019587 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
May 17 00:22:27.019592 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
May 17 00:22:27.019597 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
May 17 00:22:27.019601 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
May 17 00:22:27.019606 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
May 17 00:22:27.019611 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
May 17 00:22:27.019616 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
May 17 00:22:27.019621 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
May 17 00:22:27.019627 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
May 17 00:22:27.019632 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
May 17 00:22:27.019637 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
May 17 00:22:27.019642 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
May 17 00:22:27.019647 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
May 17 00:22:27.019652 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
May 17 00:22:27.019656 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
May 17 00:22:27.019661 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
May 17 00:22:27.019666 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
May 17 00:22:27.019672 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
May 17 00:22:27.019677 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
May 17 00:22:27.019682 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
May 17 00:22:27.019687 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
May 17 00:22:27.019692 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
May 17 00:22:27.019697 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
May 17 00:22:27.019702 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
May 17 00:22:27.019707 kernel: No NUMA configuration found
May 17 00:22:27.019712 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
May 17 00:22:27.019717 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
May 17 00:22:27.019723 kernel: Zone ranges:
May 17 00:22:27.019728 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:22:27.019733 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 17 00:22:27.019738 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
May 17 00:22:27.019743 kernel: Movable zone start for each node
May 17 00:22:27.019748 kernel: Early memory node ranges
May 17 00:22:27.019753 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
May 17 00:22:27.019757 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
May 17 00:22:27.019762 kernel: node 0: [mem 0x0000000040400000-0x0000000081b23fff]
May 17 00:22:27.019768 kernel: node 0: [mem 0x0000000081b26000-0x000000008afccfff]
May 17 00:22:27.019773 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
May 17 00:22:27.019778 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
May 17 00:22:27.019783 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
May 17 00:22:27.019792 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
May 17 00:22:27.019798 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:22:27.019803 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
May 17 00:22:27.019808 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
May 17 00:22:27.019815 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
May 17 00:22:27.019820 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
May 17 00:22:27.019825 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
May 17 00:22:27.019831 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
May 17 00:22:27.019836 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
May 17 00:22:27.019842 kernel: ACPI: PM-Timer IO Port: 0x1808
May 17 00:22:27.019847 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
May 17 00:22:27.019852 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
May 17 00:22:27.019858 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
May 17 00:22:27.019864 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
May 17 00:22:27.019869 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
May 17 00:22:27.019874 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
May 17 00:22:27.019880 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
May 17 00:22:27.019885 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
May 17 00:22:27.019890 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
May 17 00:22:27.019895 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
May 17 00:22:27.019901 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
May 17 00:22:27.019906 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
May 17 00:22:27.019912 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
May 17 00:22:27.019917 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
May 17 00:22:27.019923 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
May 17 00:22:27.019928 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
May 17 00:22:27.019933 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
May 17 00:22:27.019939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:22:27.019944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:22:27.019949 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:22:27.019955 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:22:27.019961 kernel: TSC deadline timer available
May 17 00:22:27.019966 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
May 17 00:22:27.019971 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
May 17 00:22:27.019977 kernel: Booting paravirtualized kernel on bare hardware
May 17 00:22:27.019982 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:22:27.019988 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
May 17 00:22:27.019993 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
May 17 00:22:27.019998 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
May 17 00:22:27.020004 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
May 17 00:22:27.020010 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:22:27.020016 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:22:27.020021 kernel: random: crng init done
May 17 00:22:27.020026 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
May 17 00:22:27.020032 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
May 17 00:22:27.020037 kernel: Fallback order for Node 0: 0
May 17 00:22:27.020042 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
May 17 00:22:27.020047 kernel: Policy zone: Normal
May 17 00:22:27.020054 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:22:27.020059 kernel: software IO TLB: area num 16.
May 17 00:22:27.020065 kernel: Memory: 32720300K/33452980K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 732420K reserved, 0K cma-reserved)
May 17 00:22:27.020070 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
May 17 00:22:27.020075 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:22:27.020081 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:22:27.020086 kernel: Dynamic Preempt: voluntary
May 17 00:22:27.020091 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:22:27.020097 kernel: rcu: RCU event tracing is enabled.
May 17 00:22:27.020103 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
May 17 00:22:27.020109 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:22:27.020114 kernel: Rude variant of Tasks RCU enabled.
May 17 00:22:27.020119 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:22:27.020125 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:22:27.020130 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
May 17 00:22:27.020135 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
May 17 00:22:27.020141 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:22:27.020146 kernel: Console: colour dummy device 80x25
May 17 00:22:27.020151 kernel: printk: console [tty0] enabled
May 17 00:22:27.020157 kernel: printk: console [ttyS1] enabled
May 17 00:22:27.020163 kernel: ACPI: Core revision 20230628
May 17 00:22:27.020168 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
May 17 00:22:27.020174 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:22:27.020179 kernel: DMAR: Host address width 39
May 17 00:22:27.020184 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
May 17 00:22:27.020189 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
May 17 00:22:27.020195 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
May 17 00:22:27.020200 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
May 17 00:22:27.020206 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
May 17 00:22:27.020212 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
May 17 00:22:27.020217 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
May 17 00:22:27.020223 kernel: x2apic enabled
May 17 00:22:27.020228 kernel: APIC: Switched APIC routing to: cluster x2apic
May 17 00:22:27.020233 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
May 17 00:22:27.020239 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
May 17 00:22:27.020244 kernel: CPU0: Thermal monitoring enabled (TM1)
May 17 00:22:27.020250 kernel: process: using mwait in idle threads
May 17 00:22:27.020256 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 17 00:22:27.020261 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 17 00:22:27.020266 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:22:27.020271 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
May 17 00:22:27.020276 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
May 17 00:22:27.020282 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
May 17 00:22:27.020287 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
May 17 00:22:27.020292 kernel: RETBleed: Mitigation: Enhanced IBRS
May 17 00:22:27.020297 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:22:27.020303 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 00:22:27.020308 kernel: TAA: Mitigation: TSX disabled
May 17 00:22:27.020314 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
May 17 00:22:27.020319 kernel: SRBDS: Mitigation: Microcode
May 17 00:22:27.020325 kernel: GDS: Mitigation: Microcode
May 17 00:22:27.020330 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:22:27.020335 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:22:27.020340 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:22:27.020346 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
May 17 00:22:27.020351 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
May 17 00:22:27.020356 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:22:27.020361 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
May 17 00:22:27.020366 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
May 17 00:22:27.020373 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
May 17 00:22:27.020378 kernel: Freeing SMP alternatives memory: 32K
May 17 00:22:27.020383 kernel: pid_max: default: 32768 minimum: 301
May 17 00:22:27.020388 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:22:27.020394 kernel: landlock: Up and running.
May 17 00:22:27.020399 kernel: SELinux: Initializing.
May 17 00:22:27.020404 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:22:27.020410 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:22:27.020415 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
May 17 00:22:27.020420 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 17 00:22:27.020426 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 17 00:22:27.020432 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 17 00:22:27.020438 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
May 17 00:22:27.020443 kernel: ... version: 4
May 17 00:22:27.020448 kernel: ... bit width: 48
May 17 00:22:27.020454 kernel: ... generic registers: 4
May 17 00:22:27.020459 kernel: ... value mask: 0000ffffffffffff
May 17 00:22:27.020464 kernel: ... max period: 00007fffffffffff
May 17 00:22:27.020469 kernel: ... fixed-purpose events: 3
May 17 00:22:27.020475 kernel: ... event mask: 000000070000000f
May 17 00:22:27.020481 kernel: signal: max sigframe size: 2032
May 17 00:22:27.020486 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
May 17 00:22:27.020492 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:22:27.020497 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:22:27.020509 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
May 17 00:22:27.020514 kernel: smp: Bringing up secondary CPUs ...
May 17 00:22:27.020538 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:22:27.020544 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
May 17 00:22:27.020565 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 17 00:22:27.020572 kernel: smp: Brought up 1 node, 16 CPUs
May 17 00:22:27.020577 kernel: smpboot: Max logical packages: 1
May 17 00:22:27.020582 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
May 17 00:22:27.020588 kernel: devtmpfs: initialized
May 17 00:22:27.020593 kernel: x86/mm: Memory block size: 128MB
May 17 00:22:27.020598 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b24000-0x81b24fff] (4096 bytes)
May 17 00:22:27.020604 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
May 17 00:22:27.020609 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:22:27.020616 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
May 17 00:22:27.020621 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:22:27.020626 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:22:27.020632 kernel: audit: initializing netlink subsys (disabled)
May 17 00:22:27.020637 kernel: audit: type=2000 audit(1747441341.038:1): state=initialized audit_enabled=0 res=1
May 17 00:22:27.020642 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:22:27.020648 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:22:27.020653 kernel: cpuidle: using governor menu
May 17 00:22:27.020658 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:22:27.020664 kernel: dca service started, version 1.12.1
May 17 00:22:27.020670 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 17 00:22:27.020675 kernel: PCI: Using configuration type 1 for base access
May 17 00:22:27.020681 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
May 17 00:22:27.020686 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:22:27.020691 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:22:27.020697 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:22:27.020702 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:22:27.020707 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:22:27.020713 kernel: ACPI: Added _OSI(Module Device)
May 17 00:22:27.020719 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:22:27.020724 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:22:27.020729 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:22:27.020735 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
May 17 00:22:27.020740 kernel: ACPI: Dynamic OEM Table Load:
May 17 00:22:27.020745 kernel: ACPI: SSDT 0xFFFF989901607000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
May 17 00:22:27.020751 kernel: ACPI: Dynamic OEM Table Load:
May 17 00:22:27.020756 kernel: ACPI: SSDT 0xFFFF9899015FE000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
May 17 00:22:27.020763 kernel: ACPI: Dynamic OEM Table Load:
May 17 00:22:27.020768 kernel: ACPI: SSDT 0xFFFF9899015E5100 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
May 17 00:22:27.020773 kernel: ACPI: Dynamic OEM Table Load:
May 17 00:22:27.020779 kernel: ACPI: SSDT 0xFFFF9899015FA000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
May 17 00:22:27.020784 kernel: ACPI: Dynamic OEM Table Load:
May 17 00:22:27.020789 kernel: ACPI: SSDT 0xFFFF98990160A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
May 17 00:22:27.020794 kernel: ACPI: Dynamic OEM Table Load:
May 17 00:22:27.020800 kernel: ACPI: SSDT 0xFFFF989901606C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
May 17 00:22:27.020805 kernel: ACPI: _OSC evaluated successfully for all CPUs
May 17 00:22:27.020810 kernel: ACPI: Interpreter enabled
May 17 00:22:27.020816 kernel: ACPI: PM: (supports S0 S5)
May 17 00:22:27.020822 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:22:27.020827 kernel: HEST: Enabling Firmware First mode for corrected errors.
May 17 00:22:27.020833 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
May 17 00:22:27.020838 kernel: HEST: Table parsing has been initialized.
May 17 00:22:27.020843 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
May 17 00:22:27.020848 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:22:27.020854 kernel: PCI: Ignoring E820 reservations for host bridge windows
May 17 00:22:27.020859 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
May 17 00:22:27.020865 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
May 17 00:22:27.020871 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
May 17 00:22:27.020876 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
May 17 00:22:27.020881 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
May 17 00:22:27.020887 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
May 17 00:22:27.020892 kernel: ACPI: \_TZ_.FN00: New power resource
May 17 00:22:27.020897 kernel: ACPI: \_TZ_.FN01: New power resource
May 17 00:22:27.020903 kernel: ACPI: \_TZ_.FN02: New power resource
May 17 00:22:27.020908 kernel: ACPI: \_TZ_.FN03: New power resource
May 17 00:22:27.020914 kernel: ACPI: \_TZ_.FN04: New power resource
May 17 00:22:27.020919 kernel: ACPI: \PIN_: New power resource
May 17 00:22:27.020925 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
May 17 00:22:27.021022 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:22:27.021110 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
May 17 00:22:27.021158 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
May 17 00:22:27.021166 kernel: PCI host bridge to bus 0000:00
May 17 00:22:27.021220 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:22:27.021264 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:22:27.021308 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:22:27.021350 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
May 17 00:22:27.021393 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
May 17 00:22:27.021434 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
May 17 00:22:27.021493 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
May 17 00:22:27.021595 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
May 17 00:22:27.021647 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
May 17 00:22:27.021700 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
May 17 00:22:27.021750 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
May 17 00:22:27.021801 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
May 17 00:22:27.021850 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
May 17 00:22:27.021906 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
May 17 00:22:27.021956 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
May 17 00:22:27.022003 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
May 17 00:22:27.022056 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
May 17 00:22:27.022104 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
May 17 00:22:27.022152 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
May 17 00:22:27.022207 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
May 17 00:22:27.022256 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 17 00:22:27.022311 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
May 17 00:22:27.022359 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 17 00:22:27.022411 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
May 17 00:22:27.022460 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
May 17 00:22:27.022514 kernel: pci 0000:00:16.0: PME# supported from D3hot
May 17 00:22:27.022601 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
May 17 00:22:27.022651 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
May 17 00:22:27.022707 kernel: pci 0000:00:16.1: PME# supported from D3hot
May 17 00:22:27.022761 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
May 17 00:22:27.022809 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
May 17 00:22:27.022858 kernel: pci 0000:00:16.4: PME# supported from D3hot
May 17 00:22:27.022912 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
May 17 00:22:27.022961 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
May 17 00:22:27.023010 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
May 17 00:22:27.023058 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
May 17 00:22:27.023107 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
May 17 00:22:27.023154 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
May 17 00:22:27.023202 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
May 17 00:22:27.023252 kernel: pci 0000:00:17.0: PME# supported from D3hot
May 17 00:22:27.023306 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
May 17 00:22:27.023357 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
May 17 00:22:27.023414 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
May 17 00:22:27.023465 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
May 17 00:22:27.023540 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
May 17 00:22:27.023603 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
May 17 00:22:27.023659 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
May 17 00:22:27.023708 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
May 17 00:22:27.023761 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
May 17 00:22:27.023812 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
May 17 00:22:27.023865 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
May 17 00:22:27.023914 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 17 00:22:27.023967 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
May 17 00:22:27.024020 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
May 17 00:22:27.024068 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
May 17 00:22:27.024119 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
May 17 00:22:27.024174 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
May 17 00:22:27.024223 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
May 17 00:22:27.024277 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
May 17 00:22:27.024329 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
May 17 00:22:27.024378 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
May 17 00:22:27.024428 kernel: pci 0000:01:00.0: PME# supported from D3cold
May 17 00:22:27.024481 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
May 17 00:22:27.024577 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
May 17 00:22:27.024634 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
May 17 00:22:27.024684 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
May 17 00:22:27.024735 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
May 17 00:22:27.024784 kernel: pci 0000:01:00.1: PME# supported from D3cold
May 17 00:22:27.024834 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
May 17 00:22:27.024887 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
May 17 00:22:27.024937 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 17 00:22:27.024986 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
May 17 00:22:27.025034 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
May 17 00:22:27.025084 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
May 17 00:22:27.025137 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
May 17 00:22:27.025188 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
May 17 00:22:27.025241 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
May 17 00:22:27.025291 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
May 17 00:22:27.025341 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
May 17 00:22:27.025391 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
May 17 00:22:27.025439 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
May 17 00:22:27.025488 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
May 17 00:22:27.025574 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
May 17 00:22:27.025634 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
May 17 00:22:27.025721 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
May 17 00:22:27.025771 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
May 17 00:22:27.025823 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
May 17 00:22:27.025872 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
May 17 00:22:27.025923 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
May 17 00:22:27.025972 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
May 17 00:22:27.026022 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
May 17 00:22:27.026073 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
May 17 00:22:27.026122 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
May 17 00:22:27.026176 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
May 17 00:22:27.026227 kernel: pci 0000:06:00.0: enabling Extended Tags
May 17 00:22:27.026277 kernel: pci 0000:06:00.0: supports D1 D2
May 17 00:22:27.026327 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 17 00:22:27.026377 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
May 17 00:22:27.026428 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
May 17 00:22:27.026477 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
May 17 00:22:27.026579 kernel: pci_bus 0000:07: extended config space not accessible
May 17 00:22:27.026638 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
May 17 00:22:27.026691 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
May 17 00:22:27.026742 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
May 17 00:22:27.026795 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
May 17 00:22:27.026848 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:22:27.026900 kernel: pci 0000:07:00.0: supports D1 D2
May 17 00:22:27.026952 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 17 00:22:27.027002 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
May 17 00:22:27.027052 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
May 17 00:22:27.027102 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
May 17 00:22:27.027110 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
May 17 00:22:27.027116 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
May 17 00:22:27.027125 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
May 17 00:22:27.027131 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
May 17 00:22:27.027136 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
May 17 00:22:27.027142 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
May 17 00:22:27.027148 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
May 17 00:22:27.027153 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
May 17 00:22:27.027159 kernel: iommu: Default domain type: Translated
May 17 00:22:27.027165 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:22:27.027170 kernel: PCI: Using ACPI for IRQ routing
May 17 00:22:27.027177 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:22:27.027183 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
May 17 00:22:27.027188 kernel: e820: reserve RAM buffer [mem 0x81b24000-0x83ffffff]
May 17 00:22:27.027194 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
May 17 00:22:27.027199 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
May 17 00:22:27.027205 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
May 17 00:22:27.027210 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
May 17 00:22:27.027262 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
May 17 00:22:27.027314 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
May 17 00:22:27.027368 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:22:27.027377 kernel: vgaarb: loaded
May 17 00:22:27.027383 kernel: clocksource: Switched to clocksource tsc-early
May 17 00:22:27.027388 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:22:27.027394 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:22:27.027400 kernel: pnp: PnP ACPI init
May 17 00:22:27.027450 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
May 17 00:22:27.027499 kernel: pnp 00:02: [dma 0 disabled]
May 17 00:22:27.027597 kernel: pnp 00:03: [dma 0 disabled]
May 17 00:22:27.027647 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
May 17 00:22:27.027694 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
May 17 00:22:27.027741 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
May 17 00:22:27.027790 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
May 17 00:22:27.027834 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
May 17 00:22:27.027881 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
May 17 00:22:27.027925 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
May 17 00:22:27.027973 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
May 17 00:22:27.028017 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
May 17 00:22:27.028064 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
May 17 00:22:27.028110 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
May 17 00:22:27.028158 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
May 17 00:22:27.028206 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
May 17 00:22:27.028249 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
May 17 00:22:27.028294 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
May 17 00:22:27.028337 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
May 17 00:22:27.028382 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
May 17 00:22:27.028426 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
May 17 00:22:27.028474 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
May 17 00:22:27.028484 kernel: pnp: PnP ACPI: found 10 devices
May 17 00:22:27.028490 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:22:27.028496 kernel: NET: Registered PF_INET protocol family
May 17 00:22:27.028504 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:22:27.028510 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
May 17 00:22:27.028516 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:22:27.028522 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:22:27.028553 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 17 00:22:27.028560 kernel: TCP: Hash tables configured (established 262144 bind 65536)
May 17 00:22:27.028582 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 17 00:22:27.028588 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 17 00:22:27.028593 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:22:27.028599 kernel: NET: Registered PF_XDP protocol family
May 17 00:22:27.028649 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
May 17 00:22:27.028699 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
May 17 00:22:27.028748 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
May 17 00:22:27.028799 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
May 17 00:22:27.028851 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
May 17 00:22:27.028902 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
May 17 00:22:27.028953 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
May 17 00:22:27.029002 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 17 00:22:27.029052 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
May 17 00:22:27.029100 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
May 17 00:22:27.029150 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
May 17 00:22:27.029201 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
May 17 00:22:27.029250 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
May 17 00:22:27.029297 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
May 17 00:22:27.029346 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
May 17 00:22:27.029395 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
May 17 00:22:27.029446 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
May 17 00:22:27.029494 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
May 17 00:22:27.029595 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
May 17 00:22:27.029646 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
May 17 00:22:27.029694 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
May 17 00:22:27.029743 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
May 17 00:22:27.029791 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
May 17 00:22:27.029840 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
May 17 00:22:27.029887 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
May 17 00:22:27.029933 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:22:27.029976 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:22:27.030020 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:22:27.030062 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
May 17 00:22:27.030105 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
May 17 00:22:27.030153 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
May 17 00:22:27.030199 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
May 17 00:22:27.030252 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
May 17 00:22:27.030297 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
May 17 00:22:27.030345 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
May 17 00:22:27.030391 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
May 17 00:22:27.030439 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
May 17 00:22:27.030485 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
May 17 00:22:27.030579 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
May 17 00:22:27.030627 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
May 17 00:22:27.030636 kernel: PCI: CLS 64 bytes, default 64
May 17 00:22:27.030642 kernel: DMAR: No ATSR found
May 17 00:22:27.030647 kernel: DMAR: No SATC found
May 17 00:22:27.030653 kernel: DMAR: dmar0: Using Queued invalidation
May 17 00:22:27.030702 kernel: pci 0000:00:00.0: Adding to iommu group 0
May 17 00:22:27.030750 kernel: pci 0000:00:01.0: Adding to iommu group 1
May 17 00:22:27.030799 kernel: pci 0000:00:08.0: Adding to iommu group 2
May 17 00:22:27.030850 kernel: pci 0000:00:12.0: Adding to iommu group 3
May 17 00:22:27.030898 kernel: pci 0000:00:14.0: Adding to iommu group 4
May 17 00:22:27.030945 kernel: pci 0000:00:14.2: Adding to iommu group 4
May 17 00:22:27.030995 kernel: pci 0000:00:15.0: Adding to iommu group 5
May 17 00:22:27.031041 kernel: pci 0000:00:15.1: Adding to iommu group 5
May 17 00:22:27.031090 kernel: pci 0000:00:16.0: Adding to iommu group 6
May 17 00:22:27.031137 kernel: pci 0000:00:16.1: Adding to iommu group 6
May 17 00:22:27.031185 kernel: pci 0000:00:16.4: Adding to iommu group 6
May 17 00:22:27.031234 kernel: pci 0000:00:17.0: Adding to iommu group 7
May 17 00:22:27.031283 kernel: pci 0000:00:1b.0: Adding to iommu group 8
May 17 00:22:27.031332 kernel: pci 0000:00:1b.4: Adding to iommu group 9
May 17 00:22:27.031380 kernel: pci 0000:00:1b.5: Adding to iommu group 10
May 17 00:22:27.031430 kernel: pci 0000:00:1c.0: Adding to iommu group 11
May 17 00:22:27.031477 kernel: pci 0000:00:1c.3: Adding to iommu group 12
May 17 00:22:27.031575 kernel: pci 0000:00:1e.0: Adding to iommu group 13
May 17 00:22:27.031623 kernel: pci 0000:00:1f.0: Adding to iommu group 14
May 17 00:22:27.031674 kernel: pci 0000:00:1f.4: Adding to iommu group 14
May 17 00:22:27.031722 kernel: pci 0000:00:1f.5: Adding to iommu group 14
May 17 00:22:27.031773 kernel: pci 0000:01:00.0: Adding to iommu group 1
May 17 00:22:27.031821 kernel: pci 0000:01:00.1: Adding to iommu group 1
May 17 00:22:27.031871 kernel: pci 0000:03:00.0: Adding to iommu group 15
May 17 00:22:27.031922 kernel: pci 0000:04:00.0: Adding to iommu group 16
May 17 00:22:27.031972 kernel: pci 0000:06:00.0: Adding to iommu group 17
May 17 00:22:27.032026 kernel: pci 0000:07:00.0: Adding to iommu group 17
May 17 00:22:27.032036 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
May 17 00:22:27.032042 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 17 00:22:27.032048 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
May 17 00:22:27.032054 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
May 17 00:22:27.032059 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
May 17 00:22:27.032065 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
May 17 00:22:27.032071 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
May 17 00:22:27.032121 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
May 17 00:22:27.032132 kernel: Initialise system trusted keyrings
May 17 00:22:27.032137 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
May 17 00:22:27.032143 kernel: Key type asymmetric registered
May 17 00:22:27.032149 kernel: Asymmetric key parser 'x509' registered
May 17 00:22:27.032154 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:22:27.032160 kernel: io scheduler mq-deadline registered
May 17 00:22:27.032166 kernel: io scheduler kyber registered
May 17 00:22:27.032171 kernel: io scheduler bfq registered
May 17 00:22:27.032220 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
May 17 00:22:27.032268 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
May 17 00:22:27.032319 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
May 17 00:22:27.032367 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
May 17 00:22:27.032417 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
May 17 00:22:27.032465 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
May 17 00:22:27.032547 kernel: thermal LNXTHERM:00: registered as thermal_zone0
May 17 00:22:27.032576 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
May 17 00:22:27.032582 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
May 17 00:22:27.032589 kernel: pstore: Using crash dump compression: deflate
May 17 00:22:27.032595 kernel: pstore: Registered erst as persistent store backend
May 17 00:22:27.032601 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:22:27.032606 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:22:27.032612 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:22:27.032618 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 17 00:22:27.032624 kernel: hpet_acpi_add: no address or irqs in _CRS
May 17 00:22:27.032675 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
May 17 00:22:27.032685 kernel: i8042: PNP: No PS/2 controller found.
May 17 00:22:27.032730 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
May 17 00:22:27.032776 kernel: rtc_cmos rtc_cmos: registered as rtc0
May 17 00:22:27.032820 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-05-17T00:22:25 UTC (1747441345)
May 17 00:22:27.032865 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
May 17 00:22:27.032874 kernel: intel_pstate: Intel P-state driver initializing
May 17 00:22:27.032880 kernel: intel_pstate: Disabling energy efficiency optimization
May 17 00:22:27.032886 kernel: intel_pstate: HWP enabled
May 17 00:22:27.032893 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
May 17 00:22:27.032899 kernel: vesafb: scrolling: redraw
May 17 00:22:27.032904 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
May 17 00:22:27.032910 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000a636576d, using 768k, total 768k
May 17 00:22:27.032916 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:22:27.032922 kernel: fb0: VESA VGA frame buffer device
May 17 00:22:27.032927 kernel: NET: Registered PF_INET6 protocol family
May 17 00:22:27.032933 kernel: Segment Routing with IPv6
May 17 00:22:27.032938 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:22:27.032945 kernel: NET: Registered PF_PACKET protocol family
May 17 00:22:27.032951 kernel: Key type dns_resolver registered
May 17 00:22:27.032956 kernel: microcode: Current revision: 0x00000102
May 17 00:22:27.032962 kernel: microcode: Microcode Update Driver: v2.2.
May 17 00:22:27.032968 kernel: IPI shorthand broadcast: enabled
May 17 00:22:27.032973 kernel: sched_clock: Marking stable (2481165988, 1379250713)->(4396269308, -535852607)
May 17 00:22:27.032979 kernel: registered taskstats version 1
May 17 00:22:27.032985 kernel: Loading compiled-in X.509 certificates
May 17 00:22:27.032990 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:22:27.032997 kernel: Key type .fscrypt registered
May 17 00:22:27.033003 kernel: Key type fscrypt-provisioning registered
May 17 00:22:27.033008 kernel: ima: Allocated hash algorithm: sha1
May 17 00:22:27.033014 kernel: ima: No architecture policies found
May 17 00:22:27.033020 kernel: clk: Disabling unused clocks
May 17 00:22:27.033025 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:22:27.033031 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:22:27.033037 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:22:27.033042 kernel: Run /init as init process
May 17 00:22:27.033049 kernel: with arguments:
May 17 00:22:27.033055 kernel: /init
May 17 00:22:27.033060 kernel: with environment:
May 17 00:22:27.033066 kernel: HOME=/
May 17 00:22:27.033071 kernel: TERM=linux
May 17 00:22:27.033077 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:22:27.033083 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:22:27.033091 systemd[1]: Detected architecture x86-64.
May 17 00:22:27.033098 systemd[1]: Running in initrd.
May 17 00:22:27.033104 systemd[1]: No hostname configured, using default hostname.
May 17 00:22:27.033110 systemd[1]: Hostname set to .
May 17 00:22:27.033115 systemd[1]: Initializing machine ID from random generator.
May 17 00:22:27.033121 systemd[1]: Queued start job for default target initrd.target.
May 17 00:22:27.033127 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:22:27.033133 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:22:27.033140 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:22:27.033147 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:22:27.033153 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:22:27.033159 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:22:27.033165 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:22:27.033171 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
May 17 00:22:27.033177 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:22:27.033184 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
May 17 00:22:27.033190 kernel: clocksource: Switched to clocksource tsc
May 17 00:22:27.033196 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:22:27.033202 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:22:27.033208 systemd[1]: Reached target paths.target - Path Units.
May 17 00:22:27.033214 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:22:27.033220 systemd[1]: Reached target swap.target - Swaps.
May 17 00:22:27.033226 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:22:27.033232 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:22:27.033239 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:22:27.033245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:22:27.033251 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:22:27.033257 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:22:27.033263 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:22:27.033269 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:22:27.033275 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:22:27.033281 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:22:27.033287 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:22:27.033294 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:22:27.033300 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:22:27.033306 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:22:27.033322 systemd-journald[267]: Collecting audit messages is disabled.
May 17 00:22:27.033337 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:22:27.033343 systemd-journald[267]: Journal started May 17 00:22:27.033357 systemd-journald[267]: Runtime Journal (/run/log/journal/bfa758022144482cb83e6a737662a334) is 8.0M, max 639.9M, 631.9M free. May 17 00:22:27.047327 systemd-modules-load[269]: Inserted module 'overlay' May 17 00:22:27.075613 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:22:27.121529 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:22:27.121548 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:22:27.140754 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:22:27.140845 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:22:27.140936 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:22:27.159393 systemd-modules-load[269]: Inserted module 'br_netfilter' May 17 00:22:27.159507 kernel: Bridge firewalling registered May 17 00:22:27.164867 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:22:27.212614 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:22:27.229083 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:22:27.260974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:27.282861 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:22:27.304048 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:22:27.344921 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:22:27.357717 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:22:27.359405 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:22:27.366945 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:22:27.367099 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:22:27.368161 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:22:27.384773 systemd-resolved[306]: Positive Trust Anchors: May 17 00:22:27.384777 systemd-resolved[306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:22:27.384801 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:22:27.386320 systemd-resolved[306]: Defaulting to hostname 'linux'. May 17 00:22:27.389933 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:22:27.396881 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 17 00:22:27.429851 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:22:27.503663 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:22:27.576770 dracut-cmdline[310]: dracut-dracut-053 May 17 00:22:27.583738 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:22:27.783549 kernel: SCSI subsystem initialized May 17 00:22:27.805544 kernel: Loading iSCSI transport class v2.0-870. May 17 00:22:27.829548 kernel: iscsi: registered transport (tcp) May 17 00:22:27.860639 kernel: iscsi: registered transport (qla4xxx) May 17 00:22:27.860656 kernel: QLogic iSCSI HBA Driver May 17 00:22:27.893704 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:22:27.921858 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:22:27.979639 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:22:27.979660 kernel: device-mapper: uevent: version 1.0.3 May 17 00:22:27.999444 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:22:28.058509 kernel: raid6: avx2x4 gen() 51999 MB/s May 17 00:22:28.090508 kernel: raid6: avx2x2 gen() 52677 MB/s May 17 00:22:28.126959 kernel: raid6: avx2x1 gen() 44087 MB/s May 17 00:22:28.126994 kernel: raid6: using algorithm avx2x2 gen() 52677 MB/s May 17 00:22:28.175016 kernel: raid6: .... xor() 31242 MB/s, rmw enabled May 17 00:22:28.175050 kernel: raid6: using avx2x2 recovery algorithm May 17 00:22:28.216509 kernel: xor: automatically using best checksumming function avx May 17 00:22:28.330513 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:22:28.336093 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:22:28.366816 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:22:28.373921 systemd-udevd[494]: Using default interface naming scheme 'v255'. May 17 00:22:28.376376 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:22:28.408703 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:22:28.454849 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation May 17 00:22:28.472721 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:22:28.505944 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:22:28.567532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:22:28.600370 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:22:28.600385 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:22:28.611510 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:22:28.613674 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:22:28.629134 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 17 00:22:28.656055 kernel: PTP clock support registered May 17 00:22:28.656071 kernel: ACPI: bus type USB registered May 17 00:22:28.656080 kernel: usbcore: registered new interface driver usbfs May 17 00:22:28.629250 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:22:28.718075 kernel: usbcore: registered new interface driver hub May 17 00:22:28.718099 kernel: usbcore: registered new device driver usb May 17 00:22:28.718109 kernel: libata version 3.00 loaded. May 17 00:22:28.718119 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:22:28.718128 kernel: ahci 0000:00:17.0: version 3.0 May 17 00:22:28.732406 kernel: AES CTR mode by8 optimization enabled May 17 00:22:28.732424 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode May 17 00:22:28.760511 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst May 17 00:22:28.782271 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:22:28.796355 kernel: scsi host0: ahci May 17 00:22:28.796448 kernel: scsi host1: ahci May 17 00:22:28.810397 kernel: scsi host2: ahci May 17 00:22:28.819614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:22:28.819990 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:28.848323 kernel: scsi host3: ahci May 17 00:22:28.848415 kernel: scsi host4: ahci May 17 00:22:28.848479 kernel: scsi host5: ahci May 17 00:22:28.844693 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:22:28.986869 kernel: scsi host6: ahci May 17 00:22:28.986996 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 May 17 00:22:28.987013 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 May 17 00:22:28.987028 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 May 17 00:22:28.987040 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 May 17 00:22:28.987052 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 May 17 00:22:28.987067 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 May 17 00:22:28.987081 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 May 17 00:22:28.999348 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 17 00:22:28.999367 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 May 17 00:22:28.999455 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. May 17 00:22:28.999464 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 00:22:29.045769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:22:29.117439 kernel: igb 0000:03:00.0: added PHC on eth0 May 17 00:22:29.117538 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 00:22:29.117616 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:7a May 17 00:22:29.117683 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 May 17 00:22:29.117747 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) May 17 00:22:29.117762 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:22:29.119359 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 17 00:22:29.119382 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:22:29.119407 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:22:29.119826 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:22:29.240603 kernel: igb 0000:04:00.0: added PHC on eth1 May 17 00:22:29.240694 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 00:22:29.240764 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:7b May 17 00:22:29.240829 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 May 17 00:22:29.240893 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) May 17 00:22:29.120256 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:22:29.120306 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:29.257683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:22:29.295617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:29.450004 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 17 00:22:29.450020 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) May 17 00:22:29.450121 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 00:22:29.450130 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged May 17 00:22:29.450199 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:22:29.450208 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:22:29.450215 kernel: ata7: SATA link down (SStatus 0 SControl 300) May 17 00:22:29.450222 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:22:29.450229 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 00:22:29.450237 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 May 17 00:22:29.450244 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 May 17 00:22:29.407324 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:22:29.521427 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 00:22:29.521440 kernel: ata1.00: Features: NCQ-prio May 17 00:22:29.521448 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 00:22:29.521455 kernel: ata2.00: Features: NCQ-prio May 17 00:22:29.521462 kernel: ata1.00: configured for UDMA/133 May 17 00:22:29.521469 kernel: ata2.00: configured for UDMA/133 May 17 00:22:29.521476 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 May 17 00:22:29.529523 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 17 00:22:29.529672 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 17 00:22:29.590261 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 May 17 00:22:29.590356 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 May 17 00:22:29.590440 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 00:22:29.607534 kernel: igb 0000:04:00.0 eno2: renamed from eth1 May 17 00:22:29.642943 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 00:22:29.643071 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 May 17 00:22:29.666679 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 May 17 00:22:29.666808 kernel: igb 0000:03:00.0 eno1: renamed from eth0 May 17 00:22:29.666915 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 00:22:29.705654 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 May 17 00:22:29.721771 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed May 17 00:22:29.734884 kernel: hub 1-0:1.0: USB hub found May 17 00:22:29.748322 kernel: hub 1-0:1.0: 16 ports detected May 17 00:22:29.772514 kernel: hub 2-0:1.0: USB hub found May 17 00:22:29.772634 kernel: hub 2-0:1.0: 10 ports detected May 17 00:22:29.793495 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:22:30.279847 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:22:30.279864 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:22:30.279872 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 00:22:30.279963 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 00:22:30.280033 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks May 17 00:22:30.280097 kernel: sd 1:0:0:0: [sda] Write Protect is off May 17 00:22:30.280159 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 May 17 00:22:30.280220 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:22:30.280285 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes May 17 00:22:30.280348 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:22:30.280357 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:22:30.280364 kernel: GPT:9289727 != 937703087 May 17 00:22:30.280372 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:22:30.280379 kernel: GPT:9289727 != 937703087 May 17 00:22:30.280386 kernel: GPT: Use GNU Parted to correct GPT errors. 
May 17 00:22:30.280393 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:22:30.280400 kernel: sd 1:0:0:0: [sda] Attached SCSI disk May 17 00:22:30.280461 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) May 17 00:22:30.280540 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks May 17 00:22:30.280604 kernel: sd 0:0:0:0: [sdb] Write Protect is off May 17 00:22:30.280666 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged May 17 00:22:30.280732 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 May 17 00:22:30.280794 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd May 17 00:22:30.280899 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:22:30.280966 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes May 17 00:22:30.281028 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:22:30.281037 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 17 00:22:30.281104 kernel: hub 1-14:1.0: USB hub found May 17 00:22:30.281182 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk May 17 00:22:30.281245 kernel: hub 1-14:1.0: 4 ports detected May 17 00:22:30.281317 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 May 17 00:22:30.281386 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (561) May 17 00:22:30.281395 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (694) May 17 00:22:30.271842 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. May 17 00:22:30.312639 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 May 17 00:22:30.303214 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. May 17 00:22:30.329807 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. May 17 00:22:30.334643 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. May 17 00:22:30.378040 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. May 17 00:22:30.416852 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:22:30.456621 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:22:30.456635 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:22:30.456643 disk-uuid[733]: Primary Header is updated. May 17 00:22:30.456643 disk-uuid[733]: Secondary Entries is updated. May 17 00:22:30.456643 disk-uuid[733]: Secondary Header is updated. 
May 17 00:22:30.539582 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:22:30.539595 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd May 17 00:22:30.539619 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:22:30.539627 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:22:30.539633 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:22:30.608510 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:22:30.630937 kernel: usbcore: registered new interface driver usbhid May 17 00:22:30.630968 kernel: usbhid: USB HID core driver May 17 00:22:30.674513 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 May 17 00:22:30.760515 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 May 17 00:22:30.760834 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 May 17 00:22:30.831507 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 May 17 00:22:31.528182 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:22:31.548523 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:22:31.548859 disk-uuid[734]: The operation has completed successfully. May 17 00:22:31.584838 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:22:31.584902 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:22:31.634877 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:22:31.673608 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 00:22:31.673624 sh[751]: Success May 17 00:22:31.703452 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:22:31.726847 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:22:31.734814 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:22:31.787342 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:22:31.787363 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:22:31.809442 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:22:31.829118 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:22:31.847654 kernel: BTRFS info (device dm-0): using free space tree May 17 00:22:31.886543 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:22:31.888398 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:22:31.897987 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:22:31.908788 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
May 17 00:22:32.045851 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:22:32.045866 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:22:32.045874 kernel: BTRFS info (device sda6): using free space tree May 17 00:22:32.045881 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:22:32.045888 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:22:32.045894 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:22:32.051810 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:22:32.052174 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:22:32.099747 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:22:32.109753 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:22:32.164735 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:22:32.179647 systemd-networkd[935]: lo: Link UP May 17 00:22:32.176769 ignition[902]: Ignition 2.19.0 May 17 00:22:32.179650 systemd-networkd[935]: lo: Gained carrier May 17 00:22:32.176775 ignition[902]: Stage: fetch-offline May 17 00:22:32.179865 unknown[902]: fetched base config from "system" May 17 00:22:32.176812 ignition[902]: no configs at "/usr/lib/ignition/base.d" May 17 00:22:32.179871 unknown[902]: fetched user config from "system" May 17 00:22:32.176820 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:22:32.180968 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:22:32.176895 ignition[902]: parsed url from cmdline: "" May 17 00:22:32.182750 systemd-networkd[935]: Enumeration completed May 17 00:22:32.176897 ignition[902]: no config URL provided May 17 00:22:32.183662 systemd-networkd[935]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:22:32.176902 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:22:32.196737 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:22:32.176933 ignition[902]: parsing config with SHA512: a516ed255eb6a311a751fb80b36575474f66475f7b413d09e73d6c631486ff69192ffcc8df7056f73fb064cce03ffd15f4ed410cca79cedb63ea7d83faa299c0 May 17 00:22:32.211351 systemd-networkd[935]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:22:32.180196 ignition[902]: fetch-offline: fetch-offline passed May 17 00:22:32.214977 systemd[1]: Reached target network.target - Network. May 17 00:22:32.180199 ignition[902]: POST message to Packet Timeline May 17 00:22:32.219748 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 00:22:32.180203 ignition[902]: POST Status error: resource requires networking May 17 00:22:32.416601 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 17 00:22:32.238723 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:22:32.180248 ignition[902]: Ignition finished successfully May 17 00:22:32.239935 systemd-networkd[935]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 17 00:22:32.250371 ignition[950]: Ignition 2.19.0 May 17 00:22:32.415292 systemd-networkd[935]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:22:32.250378 ignition[950]: Stage: kargs May 17 00:22:32.250590 ignition[950]: no configs at "/usr/lib/ignition/base.d" May 17 00:22:32.250603 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:22:32.251762 ignition[950]: kargs: kargs passed May 17 00:22:32.251767 ignition[950]: POST message to Packet Timeline May 17 00:22:32.251783 ignition[950]: GET https://metadata.packet.net/metadata: attempt #1 May 17 00:22:32.252529 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51280->[::1]:53: read: connection refused May 17 00:22:32.452827 ignition[950]: GET https://metadata.packet.net/metadata: attempt #2 May 17 00:22:32.453268 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42107->[::1]:53: read: connection refused May 17 00:22:32.606600 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 00:22:32.607650 systemd-networkd[935]: eno1: Link UP May 17 00:22:32.607972 systemd-networkd[935]: eno2: Link UP May 17 00:22:32.608154 systemd-networkd[935]: enp1s0f0np0: Link UP May 17 00:22:32.608360 systemd-networkd[935]: enp1s0f0np0: Gained carrier May 17 00:22:32.617792 systemd-networkd[935]: enp1s0f1np1: Link UP May 17 00:22:32.648685 systemd-networkd[935]: enp1s0f0np0: DHCPv4 address 147.75.202.203/31, gateway 147.75.202.202 acquired from 145.40.83.140 May 17 00:22:32.853622 ignition[950]: GET https://metadata.packet.net/metadata: attempt #3 May 17 00:22:32.854670 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47383->[::1]:53: read: connection refused May 17 00:22:33.434314 systemd-networkd[935]: enp1s0f1np1: Gained carrier May 17 00:22:33.626142 systemd-networkd[935]: enp1s0f0np0: Gained IPv6LL May 17 00:22:33.655204 ignition[950]: GET https://metadata.packet.net/metadata: attempt #4 May 17 00:22:33.656383 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35019->[::1]:53: read: connection refused May 17 00:22:35.256670 ignition[950]: GET https://metadata.packet.net/metadata: attempt #5 May 17 00:22:35.257811 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33333->[::1]:53: read: connection refused May 17 00:22:35.289810 systemd-networkd[935]: enp1s0f1np1: Gained IPv6LL May 17 00:22:38.460474 ignition[950]: GET https://metadata.packet.net/metadata: attempt #6 May 17 00:22:39.437024 ignition[950]: GET result: OK May 17 00:22:40.006912 ignition[950]: Ignition finished successfully May 17 00:22:40.011794 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:22:40.044771 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 17 00:22:40.051674 ignition[968]: Ignition 2.19.0 May 17 00:22:40.051679 ignition[968]: Stage: disks May 17 00:22:40.051798 ignition[968]: no configs at "/usr/lib/ignition/base.d" May 17 00:22:40.051805 ignition[968]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:22:40.052445 ignition[968]: disks: disks passed May 17 00:22:40.052448 ignition[968]: POST message to Packet Timeline May 17 00:22:40.052458 ignition[968]: GET https://metadata.packet.net/metadata: attempt #1 May 17 00:22:40.872014 ignition[968]: GET result: OK May 17 00:22:41.215654 ignition[968]: Ignition finished successfully May 17 00:22:41.218674 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:22:41.233260 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:22:41.251819 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:22:41.272736 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:22:41.293917 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:22:41.304073 systemd[1]: Reached target basic.target - Basic System. May 17 00:22:41.339758 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:22:41.374812 systemd-fsck[984]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:22:41.386085 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:22:41.406752 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:22:41.508557 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:22:41.509099 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:22:41.518989 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:22:41.542566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:22:41.562137 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:22:41.607621 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (993) May 17 00:22:41.607635 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:22:41.576809 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 17 00:22:41.670592 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:22:41.670604 kernel: BTRFS info (device sda6): using free space tree May 17 00:22:41.670611 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:22:41.675555 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:22:41.712810 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... May 17 00:22:41.712943 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:22:41.712959 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:22:41.743542 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 17 00:22:41.794677 coreos-metadata[1011]: May 17 00:22:41.761 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 00:22:41.796827 coreos-metadata[995]: May 17 00:22:41.761 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 00:22:41.761802 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:22:41.796762 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:22:41.846686 initrd-setup-root[1025]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:22:41.856616 initrd-setup-root[1032]: cut: /sysroot/etc/group: No such file or directory May 17 00:22:41.866630 initrd-setup-root[1039]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:22:41.876623 initrd-setup-root[1046]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:22:41.878725 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:22:41.914717 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:22:41.933586 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:22:41.958819 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:22:41.959340 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:22:41.981112 ignition[1113]: INFO : Ignition 2.19.0 May 17 00:22:41.981112 ignition[1113]: INFO : Stage: mount May 17 00:22:41.996601 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:22:41.996601 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:22:41.996601 ignition[1113]: INFO : mount: mount passed May 17 00:22:41.996601 ignition[1113]: INFO : POST message to Packet Timeline May 17 00:22:41.996601 ignition[1113]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 00:22:41.991710 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:22:42.709970 coreos-metadata[1011]: May 17 00:22:42.709 INFO Fetch successful May 17 00:22:42.778106 coreos-metadata[995]: May 17 00:22:42.778 INFO Fetch successful May 17 00:22:42.790585 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 17 00:22:42.790646 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. May 17 00:22:42.811673 coreos-metadata[995]: May 17 00:22:42.809 INFO wrote hostname ci-4081.3.3-n-750554c5a6 to /sysroot/etc/hostname May 17 00:22:42.810964 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:22:42.936739 ignition[1113]: INFO : GET result: OK May 17 00:22:43.268165 ignition[1113]: INFO : Ignition finished successfully May 17 00:22:43.270564 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:22:43.305807 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:22:43.325040 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 17 00:22:43.390656 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1139) May 17 00:22:43.390685 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:22:43.411201 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:22:43.429462 kernel: BTRFS info (device sda6): using free space tree May 17 00:22:43.469432 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:22:43.469456 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:22:43.483596 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:22:43.509966 ignition[1156]: INFO : Ignition 2.19.0 May 17 00:22:43.509966 ignition[1156]: INFO : Stage: files May 17 00:22:43.524789 ignition[1156]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:22:43.524789 ignition[1156]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:22:43.524789 ignition[1156]: DEBUG : files: compiled without relabeling support, skipping May 17 00:22:43.524789 ignition[1156]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:22:43.524789 ignition[1156]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:22:43.524789 ignition[1156]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:22:43.524789 ignition[1156]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:22:43.524789 ignition[1156]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:22:43.524789 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:22:43.524789 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:22:43.524789 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:22:43.524789 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:22:43.513924 unknown[1156]: wrote ssh authorized keys file for user: core May 17 00:22:43.689824 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:22:44.018409 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:22:44.815761 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 00:22:45.081382 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:22:45.081382 ignition[1156]: INFO : files: op(c): [started] processing unit "containerd.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(c): [finished] processing unit "containerd.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:22:45.111725 ignition[1156]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:22:45.111725 ignition[1156]: INFO : files: files 
passed May 17 00:22:45.111725 ignition[1156]: INFO : POST message to Packet Timeline May 17 00:22:45.111725 ignition[1156]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 00:22:46.127723 ignition[1156]: INFO : GET result: OK May 17 00:22:46.533332 ignition[1156]: INFO : Ignition finished successfully May 17 00:22:46.536204 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:22:46.567738 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:22:46.568141 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:22:46.587139 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:22:46.587205 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:22:46.622735 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:22:46.640890 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:22:46.693864 initrd-setup-root-after-ignition[1196]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:22:46.693864 initrd-setup-root-after-ignition[1196]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:22:46.670910 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:22:46.723212 initrd-setup-root-after-ignition[1201]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:22:46.774726 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:22:46.774776 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:22:46.793781 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:22:46.816764 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:22:46.836871 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:22:46.854717 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:22:46.878845 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:22:46.910039 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:22:46.939334 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:22:46.951176 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:22:46.972366 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:22:46.982497 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:22:46.982925 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:22:47.029986 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:22:47.040232 systemd[1]: Stopped target basic.target - Basic System. May 17 00:22:47.060233 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:22:47.078222 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:22:47.099226 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:22:47.109526 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:22:47.138236 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
May 17 00:22:47.148429 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:22:47.170416 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:22:47.187392 systemd[1]: Stopped target swap.target - Swaps. May 17 00:22:47.214057 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:22:47.214456 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:22:47.250008 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:22:47.260245 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:22:47.281076 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:22:47.281526 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:22:47.292280 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:22:47.292702 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:22:47.333215 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:22:47.333689 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:22:47.353388 systemd[1]: Stopped target paths.target - Path Units. May 17 00:22:47.363256 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:22:47.363686 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:22:47.380403 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:22:47.401371 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:22:47.417338 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:22:47.417678 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:22:47.445254 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:22:47.445588 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:22:47.458464 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:22:47.458886 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:22:47.474457 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:22:47.474868 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:22:47.613748 ignition[1221]: INFO : Ignition 2.19.0 May 17 00:22:47.613748 ignition[1221]: INFO : Stage: umount May 17 00:22:47.613748 ignition[1221]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:22:47.613748 ignition[1221]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:22:47.613748 ignition[1221]: INFO : umount: umount passed May 17 00:22:47.613748 ignition[1221]: INFO : POST message to Packet Timeline May 17 00:22:47.613748 ignition[1221]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 00:22:47.503299 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:22:47.503725 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:22:47.532801 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:22:47.566401 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:22:47.577922 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:22:47.578355 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 17 00:22:47.605704 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:22:47.605784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:22:47.641910 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:22:47.642450 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:22:47.642528 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:22:47.651366 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:22:47.651467 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:22:48.585995 ignition[1221]: INFO : GET result: OK May 17 00:22:48.999894 ignition[1221]: INFO : Ignition finished successfully May 17 00:22:49.001474 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:22:49.001644 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:22:49.019872 systemd[1]: Stopped target network.target - Network. May 17 00:22:49.034751 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:22:49.034942 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:22:49.053889 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:22:49.054049 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:22:49.072914 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:22:49.073070 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:22:49.092920 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:22:49.093083 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:22:49.112901 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:22:49.113069 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:22:49.132382 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:22:49.142626 systemd-networkd[935]: enp1s0f0np0: DHCPv6 lease lost May 17 00:22:49.150739 systemd-networkd[935]: enp1s0f1np1: DHCPv6 lease lost May 17 00:22:49.150986 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:22:49.170590 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:22:49.170865 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:22:49.191158 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:22:49.191603 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:22:49.211117 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:22:49.211241 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:22:49.240716 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:22:49.257658 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:22:49.257693 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:22:49.278790 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:22:49.278869 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:22:49.297848 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:22:49.297974 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:22:49.315899 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
May 17 00:22:49.316067 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:22:49.338251 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:22:49.358899 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:22:49.359275 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:22:49.390615 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:22:49.390759 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:22:49.397055 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:22:49.397154 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:22:49.423787 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:22:49.424016 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:22:49.454097 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:22:49.454379 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:22:49.798774 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). May 17 00:22:49.798800 systemd-journald[267]: Failed to send stream file descriptor to service manager: Connection refused May 17 00:22:49.484098 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:22:49.484261 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:22:49.529628 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:22:49.555699 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:22:49.555759 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:22:49.576610 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:22:49.576644 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:22:49.598649 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:22:49.598707 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:22:49.617710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:22:49.617810 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:49.640815 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:22:49.641076 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:22:49.660362 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:22:49.660619 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:22:49.682686 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:22:49.714942 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:22:49.733881 systemd[1]: Switching root. 
May 17 00:22:49.906168 systemd-journald[267]: Journal stopped
May 17 00:22:27.019281 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024 May 17 00:22:27.019286 kernel: tsc: Detected 3400.000 MHz processor May 17 00:22:27.019291 kernel: tsc: Detected 3399.906 MHz TSC May 17 00:22:27.019296 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:22:27.019301 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:22:27.019305 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 May 17 00:22:27.019310 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs May 17 00:22:27.019315 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:22:27.019320 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 May 17 00:22:27.019324 kernel: Using GB pages for direct mapping May 17 00:22:27.019330 kernel: ACPI: Early table checksum verification disabled May 17 00:22:27.019335 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) May 17 00:22:27.019340 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) May 17 00:22:27.019347 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) May 17 00:22:27.019352 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) May 17 00:22:27.019357 kernel: ACPI: FACS 0x000000008C66CF80 000040 May 17 00:22:27.019362 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) May 17 00:22:27.019368 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) May 17 00:22:27.019373 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) May 17 00:22:27.019378 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) May 17 00:22:27.019383 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) May 17 00:22:27.019388 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) May 17 00:22:27.019392 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) May 17 00:22:27.019397 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) May 17 00:22:27.019403 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 00:22:27.019408 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) May 17 00:22:27.019413 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) May 17 00:22:27.019418 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 00:22:27.019423 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 00:22:27.019428 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) May 17 00:22:27.019433 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) May 17 00:22:27.019438 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) May 17 00:22:27.019443 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) May 17 00:22:27.019449 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) May 17 00:22:27.019454 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) May 17 00:22:27.019459 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) May 17 00:22:27.019464 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) May 17 00:22:27.019469 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) May 17 00:22:27.019474 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) May 17 00:22:27.019479 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) May 17 00:22:27.019484 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) May 17 00:22:27.019489 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) May 17 00:22:27.019495 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) May 17 00:22:27.019499 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) May 17 00:22:27.019514 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] May 17 00:22:27.019540 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] May 17 00:22:27.019545 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] May 17 00:22:27.019566 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] May 17 00:22:27.019571 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] May 17 00:22:27.019576 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] May 17 00:22:27.019582 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] May 17 00:22:27.019587 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] May 17 00:22:27.019592 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] May 17 00:22:27.019597 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] May 17 00:22:27.019601 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] May 17 00:22:27.019606 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] May 17 00:22:27.019611 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] May 17 00:22:27.019616 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] May 17 00:22:27.019621 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] May 17 00:22:27.019627 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] May 17 00:22:27.019632 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] May 17 00:22:27.019637 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] May 17 00:22:27.019642 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] May 17 00:22:27.019647 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] May 17 00:22:27.019652 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] May 17 00:22:27.019656 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] May 17 00:22:27.019661 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] May 17 00:22:27.019666 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] May 17 00:22:27.019672 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] May 17 00:22:27.019677 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] May 17 00:22:27.019682 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] May 17 00:22:27.019687 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] May 17 00:22:27.019692 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] May 17 00:22:27.019697 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] May 17 00:22:27.019702 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] May 17 00:22:27.019707 kernel: No NUMA configuration found May 17 00:22:27.019712 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] May 17 00:22:27.019717 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] May 17 00:22:27.019723 kernel: Zone ranges: May 17 00:22:27.019728 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:22:27.019733 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 17 
00:22:27.019738 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] May 17 00:22:27.019743 kernel: Movable zone start for each node May 17 00:22:27.019748 kernel: Early memory node ranges May 17 00:22:27.019753 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] May 17 00:22:27.019757 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] May 17 00:22:27.019762 kernel: node 0: [mem 0x0000000040400000-0x0000000081b23fff] May 17 00:22:27.019768 kernel: node 0: [mem 0x0000000081b26000-0x000000008afccfff] May 17 00:22:27.019773 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] May 17 00:22:27.019778 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] May 17 00:22:27.019783 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] May 17 00:22:27.019792 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] May 17 00:22:27.019798 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:22:27.019803 kernel: On node 0, zone DMA: 103 pages in unavailable ranges May 17 00:22:27.019808 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges May 17 00:22:27.019815 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges May 17 00:22:27.019820 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges May 17 00:22:27.019825 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges May 17 00:22:27.019831 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges May 17 00:22:27.019836 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges May 17 00:22:27.019842 kernel: ACPI: PM-Timer IO Port: 0x1808 May 17 00:22:27.019847 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 17 00:22:27.019852 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 17 00:22:27.019858 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 17 00:22:27.019864 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 17 00:22:27.019869 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 17 00:22:27.019874 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 17 00:22:27.019880 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 17 00:22:27.019885 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 17 00:22:27.019890 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 17 00:22:27.019895 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 17 00:22:27.019901 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 17 00:22:27.019906 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 17 00:22:27.019912 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 17 00:22:27.019917 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 17 00:22:27.019923 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 17 00:22:27.019928 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 17 00:22:27.019933 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 May 17 00:22:27.019939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 00:22:27.019944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:22:27.019949 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:22:27.019955 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:22:27.019961 kernel: TSC deadline timer available May 17 00:22:27.019966 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs May 17 00:22:27.019971 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices May 17 00:22:27.019977 kernel: Booting paravirtualized kernel on bare hardware May 17 00:22:27.019982 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:22:27.019988 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 May 17 00:22:27.019993 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 May 17 00:22:27.019998 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 May 17 00:22:27.020004 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 May 17 00:22:27.020010 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:22:27.020016 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:22:27.020021 kernel: random: crng init done May 17 00:22:27.020026 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) May 17 00:22:27.020032 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) May 17 00:22:27.020037 kernel: Fallback order for Node 0: 0 May 17 00:22:27.020042 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 May 17 00:22:27.020047 kernel: Policy zone: Normal May 17 00:22:27.020054 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:22:27.020059 kernel: software IO TLB: area num 16. May 17 00:22:27.020065 kernel: Memory: 32720300K/33452980K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 732420K reserved, 0K cma-reserved) May 17 00:22:27.020070 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 May 17 00:22:27.020075 kernel: ftrace: allocating 37948 entries in 149 pages May 17 00:22:27.020081 kernel: ftrace: allocated 149 pages with 4 groups May 17 00:22:27.020086 kernel: Dynamic Preempt: voluntary May 17 00:22:27.020091 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:22:27.020097 kernel: rcu: RCU event tracing is enabled. May 17 00:22:27.020103 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. May 17 00:22:27.020109 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:22:27.020114 kernel: Rude variant of Tasks RCU enabled. May 17 00:22:27.020119 kernel: Tracing variant of Tasks RCU enabled. May 17 00:22:27.020125 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:22:27.020130 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 May 17 00:22:27.020135 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 May 17 00:22:27.020141 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:22:27.020146 kernel: Console: colour dummy device 80x25 May 17 00:22:27.020151 kernel: printk: console [tty0] enabled May 17 00:22:27.020157 kernel: printk: console [ttyS1] enabled May 17 00:22:27.020163 kernel: ACPI: Core revision 20230628 May 17 00:22:27.020168 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
May 17 00:22:27.020174 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:22:27.020179 kernel: DMAR: Host address width 39 May 17 00:22:27.020184 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 May 17 00:22:27.020189 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da May 17 00:22:27.020195 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff May 17 00:22:27.020200 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 May 17 00:22:27.020206 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 May 17 00:22:27.020212 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. May 17 00:22:27.020217 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode May 17 00:22:27.020223 kernel: x2apic enabled May 17 00:22:27.020228 kernel: APIC: Switched APIC routing to: cluster x2apic May 17 00:22:27.020233 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns May 17 00:22:27.020239 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) May 17 00:22:27.020244 kernel: CPU0: Thermal monitoring enabled (TM1) May 17 00:22:27.020250 kernel: process: using mwait in idle threads May 17 00:22:27.020256 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 17 00:22:27.020261 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 17 00:22:27.020266 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:22:27.020271 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 17 00:22:27.020276 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 17 00:22:27.020282 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 17 00:22:27.020287 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 17 00:22:27.020292 kernel: RETBleed: Mitigation: Enhanced IBRS May 17 00:22:27.020297 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:22:27.020303 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 17 00:22:27.020308 kernel: TAA: Mitigation: TSX disabled May 17 00:22:27.020314 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers May 17 00:22:27.020319 kernel: SRBDS: Mitigation: Microcode May 17 00:22:27.020325 kernel: GDS: Mitigation: Microcode May 17 00:22:27.020330 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:22:27.020335 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:22:27.020340 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:22:27.020346 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 17 00:22:27.020351 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 17 00:22:27.020356 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:22:27.020361 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 17 00:22:27.020366 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 17 00:22:27.020373 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
May 17 00:22:27.020378 kernel: Freeing SMP alternatives memory: 32K May 17 00:22:27.020383 kernel: pid_max: default: 32768 minimum: 301 May 17 00:22:27.020388 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:22:27.020394 kernel: landlock: Up and running. May 17 00:22:27.020399 kernel: SELinux: Initializing. May 17 00:22:27.020404 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:22:27.020410 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:22:27.020415 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 17 00:22:27.020420 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. May 17 00:22:27.020426 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. May 17 00:22:27.020432 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. May 17 00:22:27.020438 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. May 17 00:22:27.020443 kernel: ... version: 4 May 17 00:22:27.020448 kernel: ... bit width: 48 May 17 00:22:27.020454 kernel: ... generic registers: 4 May 17 00:22:27.020459 kernel: ... value mask: 0000ffffffffffff May 17 00:22:27.020464 kernel: ... max period: 00007fffffffffff May 17 00:22:27.020469 kernel: ... fixed-purpose events: 3 May 17 00:22:27.020475 kernel: ... event mask: 000000070000000f May 17 00:22:27.020481 kernel: signal: max sigframe size: 2032 May 17 00:22:27.020486 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 May 17 00:22:27.020492 kernel: rcu: Hierarchical SRCU implementation. May 17 00:22:27.020497 kernel: rcu: Max phase no-delay instances is 400. May 17 00:22:27.020509 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. May 17 00:22:27.020514 kernel: smp: Bringing up secondary CPUs ... May 17 00:22:27.020538 kernel: smpboot: x86: Booting SMP configuration: May 17 00:22:27.020544 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 May 17 00:22:27.020565 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
May 17 00:22:27.020572 kernel: smp: Brought up 1 node, 16 CPUs May 17 00:22:27.020577 kernel: smpboot: Max logical packages: 1 May 17 00:22:27.020582 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) May 17 00:22:27.020588 kernel: devtmpfs: initialized May 17 00:22:27.020593 kernel: x86/mm: Memory block size: 128MB May 17 00:22:27.020598 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b24000-0x81b24fff] (4096 bytes) May 17 00:22:27.020604 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) May 17 00:22:27.020609 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:22:27.020616 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) May 17 00:22:27.020621 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:22:27.020626 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:22:27.020632 kernel: audit: initializing netlink subsys (disabled) May 17 00:22:27.020637 kernel: audit: type=2000 audit(1747441341.038:1): state=initialized audit_enabled=0 res=1 May 17 00:22:27.020642 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:22:27.020648 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:22:27.020653 kernel: cpuidle: using governor menu May 17 00:22:27.020658 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:22:27.020664 kernel: dca service started, version 1.12.1 May 17 00:22:27.020670 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) May 17 00:22:27.020675 kernel: PCI: Using configuration type 1 for base access May 17 00:22:27.020681 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' May 17 00:22:27.020686 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:22:27.020691 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:22:27.020697 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:22:27.020702 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:22:27.020707 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:22:27.020713 kernel: ACPI: Added _OSI(Module Device) May 17 00:22:27.020719 kernel: ACPI: Added _OSI(Processor Device) May 17 00:22:27.020724 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:22:27.020729 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:22:27.020735 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded May 17 00:22:27.020740 kernel: ACPI: Dynamic OEM Table Load: May 17 00:22:27.020745 kernel: ACPI: SSDT 0xFFFF989901607000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) May 17 00:22:27.020751 kernel: ACPI: Dynamic OEM Table Load: May 17 00:22:27.020756 kernel: ACPI: SSDT 0xFFFF9899015FE000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) May 17 00:22:27.020763 kernel: ACPI: Dynamic OEM Table Load: May 17 00:22:27.020768 kernel: ACPI: SSDT 0xFFFF9899015E5100 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) May 17 00:22:27.020773 kernel: ACPI: Dynamic OEM Table Load: May 17 00:22:27.020779 kernel: ACPI: SSDT 0xFFFF9899015FA000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) May 17 00:22:27.020784 kernel: ACPI: Dynamic OEM Table Load: May 17 00:22:27.020789 kernel: ACPI: SSDT 0xFFFF98990160A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) May 17 00:22:27.020794 kernel: ACPI: Dynamic OEM Table Load: May 17 00:22:27.020800 kernel: ACPI: SSDT 0xFFFF989901606C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) May 17 00:22:27.020805 kernel: ACPI: _OSC evaluated successfully for all CPUs May 17 00:22:27.020810 kernel: ACPI: Interpreter enabled May 17 00:22:27.020816 kernel: ACPI: PM: (supports S0 S5) May 17 00:22:27.020822 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:22:27.020827 kernel: HEST: Enabling Firmware First mode for corrected errors. May 17 00:22:27.020833 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. May 17 00:22:27.020838 kernel: HEST: Table parsing has been initialized. May 17 00:22:27.020843 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
May 17 00:22:27.020848 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:22:27.020854 kernel: PCI: Ignoring E820 reservations for host bridge windows May 17 00:22:27.020859 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F May 17 00:22:27.020865 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource May 17 00:22:27.020871 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource May 17 00:22:27.020876 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource May 17 00:22:27.020881 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource May 17 00:22:27.020887 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource May 17 00:22:27.020892 kernel: ACPI: \_TZ_.FN00: New power resource May 17 00:22:27.020897 kernel: ACPI: \_TZ_.FN01: New power resource May 17 00:22:27.020903 kernel: ACPI: \_TZ_.FN02: New power resource May 17 00:22:27.020908 kernel: ACPI: \_TZ_.FN03: New power resource May 17 00:22:27.020914 kernel: ACPI: \_TZ_.FN04: New power resource May 17 00:22:27.020919 kernel: ACPI: \PIN_: New power resource May 17 00:22:27.020925 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) May 17 00:22:27.021022 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:22:27.021110 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] May 17 00:22:27.021158 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] May 17 00:22:27.021166 kernel: PCI host bridge to bus 0000:00 May 17 00:22:27.021220 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:22:27.021264 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 00:22:27.021308 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:22:27.021350 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] May 17 00:22:27.021393 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] May 17 00:22:27.021434 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] May 17 00:22:27.021493 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 May 17 00:22:27.021595 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 May 17 00:22:27.021647 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold May 17 00:22:27.021700 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 May 17 00:22:27.021750 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] May 17 00:22:27.021801 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 May 17 00:22:27.021850 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] May 17 00:22:27.021906 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 May 17 00:22:27.021956 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] May 17 00:22:27.022003 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold May 17 00:22:27.022056 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 May 17 00:22:27.022104 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] May 17 00:22:27.022152 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] May 17 00:22:27.022207 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 May 17 00:22:27.022256 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 00:22:27.022311 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 May 17 00:22:27.022359 kernel: pci 
0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 00:22:27.022411 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 May 17 00:22:27.022460 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] May 17 00:22:27.022514 kernel: pci 0000:00:16.0: PME# supported from D3hot May 17 00:22:27.022601 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 May 17 00:22:27.022651 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] May 17 00:22:27.022707 kernel: pci 0000:00:16.1: PME# supported from D3hot May 17 00:22:27.022761 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 May 17 00:22:27.022809 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] May 17 00:22:27.022858 kernel: pci 0000:00:16.4: PME# supported from D3hot May 17 00:22:27.022912 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 May 17 00:22:27.022961 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] May 17 00:22:27.023010 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] May 17 00:22:27.023058 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] May 17 00:22:27.023107 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] May 17 00:22:27.023154 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] May 17 00:22:27.023202 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] May 17 00:22:27.023252 kernel: pci 0000:00:17.0: PME# supported from D3hot May 17 00:22:27.023306 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 May 17 00:22:27.023357 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold May 17 00:22:27.023414 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 May 17 00:22:27.023465 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold May 17 00:22:27.023540 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 May 17 00:22:27.023603 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold May 17 00:22:27.023659 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 May 17 00:22:27.023708 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold May 17 00:22:27.023761 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 May 17 00:22:27.023812 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold May 17 00:22:27.023865 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 May 17 00:22:27.023914 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 00:22:27.023967 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 May 17 00:22:27.024020 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 May 17 00:22:27.024068 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] May 17 00:22:27.024119 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] May 17 00:22:27.024174 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 May 17 00:22:27.024223 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] May 17 00:22:27.024277 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 May 17 00:22:27.024329 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] May 17 00:22:27.024378 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] May 17 00:22:27.024428 kernel: pci 0000:01:00.0: PME# supported from D3cold May 17 00:22:27.024481 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] May 17 00:22:27.024577 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] 
(contains BAR0 for 8 VFs) May 17 00:22:27.024634 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 May 17 00:22:27.024684 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] May 17 00:22:27.024735 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] May 17 00:22:27.024784 kernel: pci 0000:01:00.1: PME# supported from D3cold May 17 00:22:27.024834 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] May 17 00:22:27.024887 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) May 17 00:22:27.024937 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 00:22:27.024986 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] May 17 00:22:27.025034 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] May 17 00:22:27.025084 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] May 17 00:22:27.025137 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect May 17 00:22:27.025188 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 May 17 00:22:27.025241 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] May 17 00:22:27.025291 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] May 17 00:22:27.025341 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] May 17 00:22:27.025391 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 17 00:22:27.025439 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] May 17 00:22:27.025488 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] May 17 00:22:27.025574 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] May 17 00:22:27.025634 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect May 17 00:22:27.025721 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 May 17 00:22:27.025771 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] May 17 00:22:27.025823 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] May 17 00:22:27.025872 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] May 17 00:22:27.025923 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold May 17 00:22:27.025972 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] May 17 00:22:27.026022 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] May 17 00:22:27.026073 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] May 17 00:22:27.026122 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] May 17 00:22:27.026176 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 May 17 00:22:27.026227 kernel: pci 0000:06:00.0: enabling Extended Tags May 17 00:22:27.026277 kernel: pci 0000:06:00.0: supports D1 D2 May 17 00:22:27.026327 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:22:27.026377 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] May 17 00:22:27.026428 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 17 00:22:27.026477 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 17 00:22:27.026579 kernel: pci_bus 0000:07: extended config space not accessible May 17 00:22:27.026638 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 May 17 00:22:27.026691 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] May 17 00:22:27.026742 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] May 17 00:22:27.026795 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] May 17 00:22:27.026848 kernel: pci 0000:07:00.0: 
Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:22:27.026900 kernel: pci 0000:07:00.0: supports D1 D2 May 17 00:22:27.026952 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:22:27.027002 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] May 17 00:22:27.027052 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 17 00:22:27.027102 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] May 17 00:22:27.027110 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 May 17 00:22:27.027116 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 May 17 00:22:27.027125 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 May 17 00:22:27.027131 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 May 17 00:22:27.027136 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 May 17 00:22:27.027142 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 May 17 00:22:27.027148 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 May 17 00:22:27.027153 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 May 17 00:22:27.027159 kernel: iommu: Default domain type: Translated May 17 00:22:27.027165 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:22:27.027170 kernel: PCI: Using ACPI for IRQ routing May 17 00:22:27.027177 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:22:27.027183 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] May 17 00:22:27.027188 kernel: e820: reserve RAM buffer [mem 0x81b24000-0x83ffffff] May 17 00:22:27.027194 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] May 17 00:22:27.027199 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] May 17 00:22:27.027205 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] May 17 00:22:27.027210 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] May 17 00:22:27.027262 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device May 17 00:22:27.027314 kernel: pci 0000:07:00.0: vgaarb: bridge control possible May 17 00:22:27.027368 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:22:27.027377 kernel: vgaarb: loaded May 17 00:22:27.027383 kernel: clocksource: Switched to clocksource tsc-early May 17 00:22:27.027388 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:22:27.027394 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:22:27.027400 kernel: pnp: PnP ACPI init May 17 00:22:27.027450 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved May 17 00:22:27.027499 kernel: pnp 00:02: [dma 0 disabled] May 17 00:22:27.027597 kernel: pnp 00:03: [dma 0 disabled] May 17 00:22:27.027647 kernel: system 00:04: [io 0x0680-0x069f] has been reserved May 17 00:22:27.027694 kernel: system 00:04: [io 0x164e-0x164f] has been reserved May 17 00:22:27.027741 kernel: system 00:05: [io 0x1854-0x1857] has been reserved May 17 00:22:27.027790 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved May 17 00:22:27.027834 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved May 17 00:22:27.027881 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved May 17 00:22:27.027925 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved May 17 00:22:27.027973 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved May 17 00:22:27.028017 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved May 17 00:22:27.028064 
kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved May 17 00:22:27.028110 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved May 17 00:22:27.028158 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved May 17 00:22:27.028206 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved May 17 00:22:27.028249 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved May 17 00:22:27.028294 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved May 17 00:22:27.028337 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved May 17 00:22:27.028382 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved May 17 00:22:27.028426 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved May 17 00:22:27.028474 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved May 17 00:22:27.028484 kernel: pnp: PnP ACPI: found 10 devices May 17 00:22:27.028490 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:22:27.028496 kernel: NET: Registered PF_INET protocol family May 17 00:22:27.028504 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:22:27.028510 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) May 17 00:22:27.028516 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:22:27.028522 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:22:27.028553 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 00:22:27.028560 kernel: TCP: Hash tables configured (established 262144 bind 65536) May 17 00:22:27.028582 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) May 17 00:22:27.028588 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) May 17 00:22:27.028593 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:22:27.028599 kernel: NET: Registered PF_XDP protocol family May 17 00:22:27.028649 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] May 17 00:22:27.028699 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] May 17 00:22:27.028748 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] May 17 00:22:27.028799 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] May 17 00:22:27.028851 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 17 00:22:27.028902 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] May 17 00:22:27.028953 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 17 00:22:27.029002 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 00:22:27.029052 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] May 17 00:22:27.029100 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] May 17 00:22:27.029150 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] May 17 00:22:27.029201 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] May 17 00:22:27.029250 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] May 17 00:22:27.029297 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] May 17 00:22:27.029346 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] May 17 00:22:27.029395 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] May 17 
00:22:27.029446 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] May 17 00:22:27.029494 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] May 17 00:22:27.029595 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] May 17 00:22:27.029646 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 17 00:22:27.029694 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] May 17 00:22:27.029743 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] May 17 00:22:27.029791 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 17 00:22:27.029840 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 17 00:22:27.029887 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc May 17 00:22:27.029933 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:22:27.029976 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:22:27.030020 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:22:27.030062 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] May 17 00:22:27.030105 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] May 17 00:22:27.030153 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] May 17 00:22:27.030199 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] May 17 00:22:27.030252 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] May 17 00:22:27.030297 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] May 17 00:22:27.030345 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] May 17 00:22:27.030391 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] May 17 00:22:27.030439 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] May 17 00:22:27.030485 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] May 17 00:22:27.030579 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] May 17 00:22:27.030627 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] May 17 00:22:27.030636 kernel: PCI: CLS 64 bytes, default 64 May 17 00:22:27.030642 kernel: DMAR: No ATSR found May 17 00:22:27.030647 kernel: DMAR: No SATC found May 17 00:22:27.030653 kernel: DMAR: dmar0: Using Queued invalidation May 17 00:22:27.030702 kernel: pci 0000:00:00.0: Adding to iommu group 0 May 17 00:22:27.030750 kernel: pci 0000:00:01.0: Adding to iommu group 1 May 17 00:22:27.030799 kernel: pci 0000:00:08.0: Adding to iommu group 2 May 17 00:22:27.030850 kernel: pci 0000:00:12.0: Adding to iommu group 3 May 17 00:22:27.030898 kernel: pci 0000:00:14.0: Adding to iommu group 4 May 17 00:22:27.030945 kernel: pci 0000:00:14.2: Adding to iommu group 4 May 17 00:22:27.030995 kernel: pci 0000:00:15.0: Adding to iommu group 5 May 17 00:22:27.031041 kernel: pci 0000:00:15.1: Adding to iommu group 5 May 17 00:22:27.031090 kernel: pci 0000:00:16.0: Adding to iommu group 6 May 17 00:22:27.031137 kernel: pci 0000:00:16.1: Adding to iommu group 6 May 17 00:22:27.031185 kernel: pci 0000:00:16.4: Adding to iommu group 6 May 17 00:22:27.031234 kernel: pci 0000:00:17.0: Adding to iommu group 7 May 17 00:22:27.031283 kernel: pci 0000:00:1b.0: Adding to iommu group 8 May 17 00:22:27.031332 kernel: pci 0000:00:1b.4: Adding to iommu group 9 May 17 00:22:27.031380 kernel: pci 0000:00:1b.5: Adding to iommu group 10 May 17 00:22:27.031430 kernel: pci 0000:00:1c.0: Adding to iommu group 11 May 17 00:22:27.031477 kernel: pci 0000:00:1c.3: Adding to iommu group 12 May 17 
00:22:27.031575 kernel: pci 0000:00:1e.0: Adding to iommu group 13 May 17 00:22:27.031623 kernel: pci 0000:00:1f.0: Adding to iommu group 14 May 17 00:22:27.031674 kernel: pci 0000:00:1f.4: Adding to iommu group 14 May 17 00:22:27.031722 kernel: pci 0000:00:1f.5: Adding to iommu group 14 May 17 00:22:27.031773 kernel: pci 0000:01:00.0: Adding to iommu group 1 May 17 00:22:27.031821 kernel: pci 0000:01:00.1: Adding to iommu group 1 May 17 00:22:27.031871 kernel: pci 0000:03:00.0: Adding to iommu group 15 May 17 00:22:27.031922 kernel: pci 0000:04:00.0: Adding to iommu group 16 May 17 00:22:27.031972 kernel: pci 0000:06:00.0: Adding to iommu group 17 May 17 00:22:27.032026 kernel: pci 0000:07:00.0: Adding to iommu group 17 May 17 00:22:27.032036 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O May 17 00:22:27.032042 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 17 00:22:27.032048 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) May 17 00:22:27.032054 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer May 17 00:22:27.032059 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules May 17 00:22:27.032065 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules May 17 00:22:27.032071 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules May 17 00:22:27.032121 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) May 17 00:22:27.032132 kernel: Initialise system trusted keyrings May 17 00:22:27.032137 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 May 17 00:22:27.032143 kernel: Key type asymmetric registered May 17 00:22:27.032149 kernel: Asymmetric key parser 'x509' registered May 17 00:22:27.032154 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 17 00:22:27.032160 kernel: io scheduler mq-deadline registered May 17 00:22:27.032166 kernel: io scheduler kyber registered May 17 00:22:27.032171 kernel: io scheduler bfq registered May 17 00:22:27.032220 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 May 17 00:22:27.032268 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 May 17 00:22:27.032319 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 May 17 00:22:27.032367 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 May 17 00:22:27.032417 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 May 17 00:22:27.032465 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 May 17 00:22:27.032547 kernel: thermal LNXTHERM:00: registered as thermal_zone0 May 17 00:22:27.032576 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) May 17 00:22:27.032582 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. May 17 00:22:27.032589 kernel: pstore: Using crash dump compression: deflate May 17 00:22:27.032595 kernel: pstore: Registered erst as persistent store backend May 17 00:22:27.032601 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:22:27.032606 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:22:27.032612 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:22:27.032618 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A May 17 00:22:27.032624 kernel: hpet_acpi_add: no address or irqs in _CRS May 17 00:22:27.032675 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) May 17 00:22:27.032685 kernel: i8042: PNP: No PS/2 controller found. 
May 17 00:22:27.032730 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 May 17 00:22:27.032776 kernel: rtc_cmos rtc_cmos: registered as rtc0 May 17 00:22:27.032820 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-05-17T00:22:25 UTC (1747441345) May 17 00:22:27.032865 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram May 17 00:22:27.032874 kernel: intel_pstate: Intel P-state driver initializing May 17 00:22:27.032880 kernel: intel_pstate: Disabling energy efficiency optimization May 17 00:22:27.032886 kernel: intel_pstate: HWP enabled May 17 00:22:27.032893 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 May 17 00:22:27.032899 kernel: vesafb: scrolling: redraw May 17 00:22:27.032904 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 May 17 00:22:27.032910 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000a636576d, using 768k, total 768k May 17 00:22:27.032916 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:22:27.032922 kernel: fb0: VESA VGA frame buffer device May 17 00:22:27.032927 kernel: NET: Registered PF_INET6 protocol family May 17 00:22:27.032933 kernel: Segment Routing with IPv6 May 17 00:22:27.032938 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:22:27.032945 kernel: NET: Registered PF_PACKET protocol family May 17 00:22:27.032951 kernel: Key type dns_resolver registered May 17 00:22:27.032956 kernel: microcode: Current revision: 0x00000102 May 17 00:22:27.032962 kernel: microcode: Microcode Update Driver: v2.2. May 17 00:22:27.032968 kernel: IPI shorthand broadcast: enabled May 17 00:22:27.032973 kernel: sched_clock: Marking stable (2481165988, 1379250713)->(4396269308, -535852607) May 17 00:22:27.032979 kernel: registered taskstats version 1 May 17 00:22:27.032985 kernel: Loading compiled-in X.509 certificates May 17 00:22:27.032990 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 00:22:27.032997 kernel: Key type .fscrypt registered May 17 00:22:27.033003 kernel: Key type fscrypt-provisioning registered May 17 00:22:27.033008 kernel: ima: Allocated hash algorithm: sha1 May 17 00:22:27.033014 kernel: ima: No architecture policies found May 17 00:22:27.033020 kernel: clk: Disabling unused clocks May 17 00:22:27.033025 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 00:22:27.033031 kernel: Write protecting the kernel read-only data: 36864k May 17 00:22:27.033037 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 00:22:27.033042 kernel: Run /init as init process May 17 00:22:27.033049 kernel: with arguments: May 17 00:22:27.033055 kernel: /init May 17 00:22:27.033060 kernel: with environment: May 17 00:22:27.033066 kernel: HOME=/ May 17 00:22:27.033071 kernel: TERM=linux May 17 00:22:27.033077 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:22:27.033083 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:22:27.033091 systemd[1]: Detected architecture x86-64. May 17 00:22:27.033098 systemd[1]: Running in initrd. May 17 00:22:27.033104 systemd[1]: No hostname configured, using default hostname. May 17 00:22:27.033110 systemd[1]: Hostname set to . 
May 17 00:22:27.033115 systemd[1]: Initializing machine ID from random generator. May 17 00:22:27.033121 systemd[1]: Queued start job for default target initrd.target. May 17 00:22:27.033127 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:22:27.033133 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:22:27.033140 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:22:27.033147 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:22:27.033153 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:22:27.033159 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:22:27.033165 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:22:27.033171 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz May 17 00:22:27.033177 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:22:27.033184 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns May 17 00:22:27.033190 kernel: clocksource: Switched to clocksource tsc May 17 00:22:27.033196 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:22:27.033202 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:22:27.033208 systemd[1]: Reached target paths.target - Path Units. May 17 00:22:27.033214 systemd[1]: Reached target slices.target - Slice Units. May 17 00:22:27.033220 systemd[1]: Reached target swap.target - Swaps. May 17 00:22:27.033226 systemd[1]: Reached target timers.target - Timer Units. May 17 00:22:27.033232 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:22:27.033239 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:22:27.033245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:22:27.033251 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:22:27.033257 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:22:27.033263 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:22:27.033269 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:22:27.033275 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:22:27.033281 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:22:27.033287 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:22:27.033294 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:22:27.033300 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:22:27.033306 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:22:27.033322 systemd-journald[267]: Collecting audit messages is disabled. May 17 00:22:27.033337 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
May 17 00:22:27.033343 systemd-journald[267]: Journal started May 17 00:22:27.033357 systemd-journald[267]: Runtime Journal (/run/log/journal/bfa758022144482cb83e6a737662a334) is 8.0M, max 639.9M, 631.9M free. May 17 00:22:27.047327 systemd-modules-load[269]: Inserted module 'overlay' May 17 00:22:27.075613 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:22:27.121529 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:22:27.121548 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:22:27.140754 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:22:27.140845 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:22:27.140936 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:22:27.159393 systemd-modules-load[269]: Inserted module 'br_netfilter' May 17 00:22:27.159507 kernel: Bridge firewalling registered May 17 00:22:27.164867 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:22:27.212614 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:22:27.229083 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:22:27.260974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:27.282861 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:22:27.304048 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:22:27.344921 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:22:27.357717 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:22:27.359405 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:22:27.366945 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:22:27.367099 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:22:27.368161 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:22:27.384773 systemd-resolved[306]: Positive Trust Anchors: May 17 00:22:27.384777 systemd-resolved[306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:22:27.384801 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:22:27.386320 systemd-resolved[306]: Defaulting to hostname 'linux'. May 17 00:22:27.389933 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:22:27.396881 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 17 00:22:27.429851 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:22:27.503663 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:22:27.576770 dracut-cmdline[310]: dracut-dracut-053 May 17 00:22:27.583738 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:22:27.783549 kernel: SCSI subsystem initialized May 17 00:22:27.805544 kernel: Loading iSCSI transport class v2.0-870. May 17 00:22:27.829548 kernel: iscsi: registered transport (tcp) May 17 00:22:27.860639 kernel: iscsi: registered transport (qla4xxx) May 17 00:22:27.860656 kernel: QLogic iSCSI HBA Driver May 17 00:22:27.893704 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:22:27.921858 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:22:27.979639 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:22:27.979660 kernel: device-mapper: uevent: version 1.0.3 May 17 00:22:27.999444 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:22:28.058509 kernel: raid6: avx2x4 gen() 51999 MB/s May 17 00:22:28.090508 kernel: raid6: avx2x2 gen() 52677 MB/s May 17 00:22:28.126959 kernel: raid6: avx2x1 gen() 44087 MB/s May 17 00:22:28.126994 kernel: raid6: using algorithm avx2x2 gen() 52677 MB/s May 17 00:22:28.175016 kernel: raid6: .... xor() 31242 MB/s, rmw enabled May 17 00:22:28.175050 kernel: raid6: using avx2x2 recovery algorithm May 17 00:22:28.216509 kernel: xor: automatically using best checksumming function avx May 17 00:22:28.330513 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:22:28.336093 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:22:28.366816 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:22:28.373921 systemd-udevd[494]: Using default interface naming scheme 'v255'. May 17 00:22:28.376376 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:22:28.408703 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:22:28.454849 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation May 17 00:22:28.472721 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:22:28.505944 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:22:28.567532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:22:28.600370 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:22:28.600385 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:22:28.611510 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:22:28.613674 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:22:28.629134 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
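The verity.usr=PARTUUID=... and verity.usrhash=... arguments handed to dracut describe dm-verity protection for the USR partition: it is opened read-only and every block read is checked against a hash tree whose root hash is the usrhash value, yielding the /dev/mapper/usr device named by mount.usr=. What the initrd sets up is roughly equivalent to the veritysetup call below; treating data and hash as sharing one partition, and the <offset> placeholder, are assumptions about Flatcar's on-disk layout, not taken from this log:

    veritysetup open \
        /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 usr \
        /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 \
        6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e \
        --hash-offset=<offset>   # result appears as /dev/mapper/usr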
May 17 00:22:28.656055 kernel: PTP clock support registered May 17 00:22:28.656071 kernel: ACPI: bus type USB registered May 17 00:22:28.656080 kernel: usbcore: registered new interface driver usbfs May 17 00:22:28.629250 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:22:28.718075 kernel: usbcore: registered new interface driver hub May 17 00:22:28.718099 kernel: usbcore: registered new device driver usb May 17 00:22:28.718109 kernel: libata version 3.00 loaded. May 17 00:22:28.718119 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:22:28.718128 kernel: ahci 0000:00:17.0: version 3.0 May 17 00:22:28.732406 kernel: AES CTR mode by8 optimization enabled May 17 00:22:28.732424 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode May 17 00:22:28.760511 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst May 17 00:22:28.782271 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:22:28.796355 kernel: scsi host0: ahci May 17 00:22:28.796448 kernel: scsi host1: ahci May 17 00:22:28.810397 kernel: scsi host2: ahci May 17 00:22:28.819614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:22:28.819990 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:28.848323 kernel: scsi host3: ahci May 17 00:22:28.848415 kernel: scsi host4: ahci May 17 00:22:28.848479 kernel: scsi host5: ahci May 17 00:22:28.844693 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:22:28.986869 kernel: scsi host6: ahci May 17 00:22:28.986996 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 May 17 00:22:28.987013 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 May 17 00:22:28.987028 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 May 17 00:22:28.987040 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 May 17 00:22:28.987052 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 May 17 00:22:28.987067 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 May 17 00:22:28.987081 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 May 17 00:22:28.999348 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 17 00:22:28.999367 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 May 17 00:22:28.999455 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. May 17 00:22:28.999464 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 00:22:29.045769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:22:29.117439 kernel: igb 0000:03:00.0: added PHC on eth0 May 17 00:22:29.117538 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 00:22:29.117616 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:7a May 17 00:22:29.117683 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 May 17 00:22:29.117747 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) May 17 00:22:29.117762 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:22:29.119359 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 17 00:22:29.119382 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:22:29.119407 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:22:29.119826 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:22:29.240603 kernel: igb 0000:04:00.0: added PHC on eth1 May 17 00:22:29.240694 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 00:22:29.240764 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:7b May 17 00:22:29.240829 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 May 17 00:22:29.240893 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) May 17 00:22:29.120256 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:22:29.120306 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:29.257683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:22:29.295617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:29.450004 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 17 00:22:29.450020 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) May 17 00:22:29.450121 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 00:22:29.450130 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged May 17 00:22:29.450199 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:22:29.450208 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:22:29.450215 kernel: ata7: SATA link down (SStatus 0 SControl 300) May 17 00:22:29.450222 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:22:29.450229 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 00:22:29.450237 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 May 17 00:22:29.450244 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 May 17 00:22:29.407324 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:22:29.521427 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 00:22:29.521440 kernel: ata1.00: Features: NCQ-prio May 17 00:22:29.521448 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 00:22:29.521455 kernel: ata2.00: Features: NCQ-prio May 17 00:22:29.521462 kernel: ata1.00: configured for UDMA/133 May 17 00:22:29.521469 kernel: ata2.00: configured for UDMA/133 May 17 00:22:29.521476 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 May 17 00:22:29.529523 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 17 00:22:29.529672 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 17 00:22:29.590261 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 May 17 00:22:29.590356 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 May 17 00:22:29.590440 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 00:22:29.607534 kernel: igb 0000:04:00.0 eno2: renamed from eth1 May 17 00:22:29.642943 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 00:22:29.643071 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 May 17 00:22:29.666679 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 May 17 00:22:29.666808 kernel: igb 0000:03:00.0 eno1: renamed from eth0 May 17 00:22:29.666915 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 00:22:29.705654 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 May 17 00:22:29.721771 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed May 17 00:22:29.734884 kernel: hub 1-0:1.0: USB hub found May 17 00:22:29.748322 kernel: hub 1-0:1.0: 16 ports detected May 17 00:22:29.772514 kernel: hub 2-0:1.0: USB hub found May 17 00:22:29.772634 kernel: hub 2-0:1.0: 10 ports detected May 17 00:22:29.793495 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:22:30.279847 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:22:30.279864 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:22:30.279872 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 00:22:30.279963 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 00:22:30.280033 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks May 17 00:22:30.280097 kernel: sd 1:0:0:0: [sda] Write Protect is off May 17 00:22:30.280159 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 May 17 00:22:30.280220 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:22:30.280285 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes May 17 00:22:30.280348 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:22:30.280357 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:22:30.280364 kernel: GPT:9289727 != 937703087 May 17 00:22:30.280372 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:22:30.280379 kernel: GPT:9289727 != 937703087 May 17 00:22:30.280386 kernel: GPT: Use GNU Parted to correct GPT errors. 
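The GPT complaints above (9289727 != 937703087) mean the backup GPT header still sits where the end of the much smaller install image was, not at the end of this 480 GB disk. The kernel suggests GNU Parted; sgdisk performs the same repair non-interactively, and the disk-uuid step below does the equivalent rewrite (see the "Primary Header is updated" / "Secondary Header is updated" messages). A sketch of the manual fix:

    sgdisk --move-second-header /dev/sda   # relocate the backup GPT to the last sector
    # or: parted /dev/sda print            # parted offers to fix the mismatch interactively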
May 17 00:22:30.280393 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:22:30.280400 kernel: sd 1:0:0:0: [sda] Attached SCSI disk May 17 00:22:30.280461 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) May 17 00:22:30.280540 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks May 17 00:22:30.280604 kernel: sd 0:0:0:0: [sdb] Write Protect is off May 17 00:22:30.280666 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged May 17 00:22:30.280732 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 May 17 00:22:30.280794 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd May 17 00:22:30.280899 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:22:30.280966 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes May 17 00:22:30.281028 kernel: ata1.00: Enabling discard_zeroes_data May 17 00:22:30.281037 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 17 00:22:30.281104 kernel: hub 1-14:1.0: USB hub found May 17 00:22:30.281182 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk May 17 00:22:30.281245 kernel: hub 1-14:1.0: 4 ports detected May 17 00:22:30.281317 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 May 17 00:22:30.281386 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (561) May 17 00:22:30.281395 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (694) May 17 00:22:30.271842 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. May 17 00:22:30.312639 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 May 17 00:22:30.303214 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. May 17 00:22:30.329807 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. May 17 00:22:30.334643 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. May 17 00:22:30.378040 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. May 17 00:22:30.416852 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:22:30.456621 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:22:30.456635 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:22:30.456643 disk-uuid[733]: Primary Header is updated. May 17 00:22:30.456643 disk-uuid[733]: Secondary Entries is updated. May 17 00:22:30.456643 disk-uuid[733]: Secondary Header is updated. 
May 17 00:22:30.539582 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:22:30.539595 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd May 17 00:22:30.539619 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:22:30.539627 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:22:30.539633 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:22:30.608510 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:22:30.630937 kernel: usbcore: registered new interface driver usbhid May 17 00:22:30.630968 kernel: usbhid: USB HID core driver May 17 00:22:30.674513 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 May 17 00:22:30.760515 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 May 17 00:22:30.760834 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 May 17 00:22:30.831507 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 May 17 00:22:31.528182 kernel: ata2.00: Enabling discard_zeroes_data May 17 00:22:31.548523 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:22:31.548859 disk-uuid[734]: The operation has completed successfully. May 17 00:22:31.584838 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:22:31.584902 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:22:31.634877 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:22:31.673608 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 00:22:31.673624 sh[751]: Success May 17 00:22:31.703452 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:22:31.726847 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:22:31.734814 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:22:31.787342 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:22:31.787363 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:22:31.809442 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:22:31.829118 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:22:31.847654 kernel: BTRFS info (device dm-0): using free space tree May 17 00:22:31.886543 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:22:31.888398 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:22:31.897987 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:22:31.908788 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
May 17 00:22:32.045851 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:22:32.045866 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:22:32.045874 kernel: BTRFS info (device sda6): using free space tree May 17 00:22:32.045881 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:22:32.045888 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:22:32.045894 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:22:32.051810 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:22:32.052174 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:22:32.099747 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:22:32.109753 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:22:32.164735 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:22:32.179647 systemd-networkd[935]: lo: Link UP May 17 00:22:32.176769 ignition[902]: Ignition 2.19.0 May 17 00:22:32.179650 systemd-networkd[935]: lo: Gained carrier May 17 00:22:32.176775 ignition[902]: Stage: fetch-offline May 17 00:22:32.179865 unknown[902]: fetched base config from "system" May 17 00:22:32.176812 ignition[902]: no configs at "/usr/lib/ignition/base.d" May 17 00:22:32.179871 unknown[902]: fetched user config from "system" May 17 00:22:32.176820 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:22:32.180968 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:22:32.176895 ignition[902]: parsed url from cmdline: "" May 17 00:22:32.182750 systemd-networkd[935]: Enumeration completed May 17 00:22:32.176897 ignition[902]: no config URL provided May 17 00:22:32.183662 systemd-networkd[935]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:22:32.176902 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:22:32.196737 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:22:32.176933 ignition[902]: parsing config with SHA512: a516ed255eb6a311a751fb80b36575474f66475f7b413d09e73d6c631486ff69192ffcc8df7056f73fb064cce03ffd15f4ed410cca79cedb63ea7d83faa299c0 May 17 00:22:32.211351 systemd-networkd[935]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:22:32.180196 ignition[902]: fetch-offline: fetch-offline passed May 17 00:22:32.214977 systemd[1]: Reached target network.target - Network. May 17 00:22:32.180199 ignition[902]: POST message to Packet Timeline May 17 00:22:32.219748 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 00:22:32.180203 ignition[902]: POST Status error: resource requires networking May 17 00:22:32.416601 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 17 00:22:32.238723 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:22:32.180248 ignition[902]: Ignition finished successfully May 17 00:22:32.239935 systemd-networkd[935]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
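zz-default.network, referenced for every interface here, is Flatcar's catch-all systemd-networkd profile: it is why eno1, eno2 and both mlx5 ports all come up with DHCP before any machine-specific configuration exists. Its effective content is approximately the following; this is a paraphrase for illustration, not a verbatim copy of the shipped file:

    [Match]
    Name=*

    [Network]
    DHCP=yes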
May 17 00:22:32.250371 ignition[950]: Ignition 2.19.0 May 17 00:22:32.415292 systemd-networkd[935]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:22:32.250378 ignition[950]: Stage: kargs May 17 00:22:32.250590 ignition[950]: no configs at "/usr/lib/ignition/base.d" May 17 00:22:32.250603 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:22:32.251762 ignition[950]: kargs: kargs passed May 17 00:22:32.251767 ignition[950]: POST message to Packet Timeline May 17 00:22:32.251783 ignition[950]: GET https://metadata.packet.net/metadata: attempt #1 May 17 00:22:32.252529 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51280->[::1]:53: read: connection refused May 17 00:22:32.452827 ignition[950]: GET https://metadata.packet.net/metadata: attempt #2 May 17 00:22:32.453268 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42107->[::1]:53: read: connection refused May 17 00:22:32.606600 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 00:22:32.607650 systemd-networkd[935]: eno1: Link UP May 17 00:22:32.607972 systemd-networkd[935]: eno2: Link UP May 17 00:22:32.608154 systemd-networkd[935]: enp1s0f0np0: Link UP May 17 00:22:32.608360 systemd-networkd[935]: enp1s0f0np0: Gained carrier May 17 00:22:32.617792 systemd-networkd[935]: enp1s0f1np1: Link UP May 17 00:22:32.648685 systemd-networkd[935]: enp1s0f0np0: DHCPv4 address 147.75.202.203/31, gateway 147.75.202.202 acquired from 145.40.83.140 May 17 00:22:32.853622 ignition[950]: GET https://metadata.packet.net/metadata: attempt #3 May 17 00:22:32.854670 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47383->[::1]:53: read: connection refused May 17 00:22:33.434314 systemd-networkd[935]: enp1s0f1np1: Gained carrier May 17 00:22:33.626142 systemd-networkd[935]: enp1s0f0np0: Gained IPv6LL May 17 00:22:33.655204 ignition[950]: GET https://metadata.packet.net/metadata: attempt #4 May 17 00:22:33.656383 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35019->[::1]:53: read: connection refused May 17 00:22:35.256670 ignition[950]: GET https://metadata.packet.net/metadata: attempt #5 May 17 00:22:35.257811 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33333->[::1]:53: read: connection refused May 17 00:22:35.289810 systemd-networkd[935]: enp1s0f1np1: Gained IPv6LL May 17 00:22:38.460474 ignition[950]: GET https://metadata.packet.net/metadata: attempt #6 May 17 00:22:39.437024 ignition[950]: GET result: OK May 17 00:22:40.006912 ignition[950]: Ignition finished successfully May 17 00:22:40.011794 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:22:40.044771 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
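The attempt #1 through #6 sequence above is Ignition's retry loop while networking comes up: the early GETs die at the local stub resolver on [::1]:53 because nothing can resolve metadata.packet.net yet, and only after enp1s0f0np0 gains carrier and the DHCPv4 lease (147.75.202.203/31) does attempt #6 return OK. The endpoint is the same one visible in the log and can be queried by hand from a booted machine; piping through jq is merely an assumed convenience:

    curl -s https://metadata.packet.net/metadata | jq .hostname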
May 17 00:22:40.051674 ignition[968]: Ignition 2.19.0 May 17 00:22:40.051679 ignition[968]: Stage: disks May 17 00:22:40.051798 ignition[968]: no configs at "/usr/lib/ignition/base.d" May 17 00:22:40.051805 ignition[968]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:22:40.052445 ignition[968]: disks: disks passed May 17 00:22:40.052448 ignition[968]: POST message to Packet Timeline May 17 00:22:40.052458 ignition[968]: GET https://metadata.packet.net/metadata: attempt #1 May 17 00:22:40.872014 ignition[968]: GET result: OK May 17 00:22:41.215654 ignition[968]: Ignition finished successfully May 17 00:22:41.218674 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:22:41.233260 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:22:41.251819 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:22:41.272736 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:22:41.293917 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:22:41.304073 systemd[1]: Reached target basic.target - Basic System. May 17 00:22:41.339758 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:22:41.374812 systemd-fsck[984]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:22:41.386085 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:22:41.406752 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:22:41.508557 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:22:41.509099 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:22:41.518989 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:22:41.542566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:22:41.562137 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:22:41.607621 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (993) May 17 00:22:41.607635 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:22:41.576809 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 17 00:22:41.670592 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:22:41.670604 kernel: BTRFS info (device sda6): using free space tree May 17 00:22:41.670611 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:22:41.675555 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:22:41.712810 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... May 17 00:22:41.712943 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:22:41.712959 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:22:41.743542 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
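systemd-fsck declares ROOT clean above before /sysroot is mounted read-write. The equivalent manual, non-destructive check would look like this sketch (-n answers "no" to every repair prompt, so the filesystem is only inspected):

    e2fsck -n /dev/disk/by-label/ROOT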
May 17 00:22:41.794677 coreos-metadata[1011]: May 17 00:22:41.761 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 00:22:41.796827 coreos-metadata[995]: May 17 00:22:41.761 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 00:22:41.761802 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:22:41.796762 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:22:41.846686 initrd-setup-root[1025]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:22:41.856616 initrd-setup-root[1032]: cut: /sysroot/etc/group: No such file or directory May 17 00:22:41.866630 initrd-setup-root[1039]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:22:41.876623 initrd-setup-root[1046]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:22:41.878725 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:22:41.914717 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:22:41.933586 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:22:41.958819 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:22:41.959340 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:22:41.981112 ignition[1113]: INFO : Ignition 2.19.0 May 17 00:22:41.981112 ignition[1113]: INFO : Stage: mount May 17 00:22:41.996601 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:22:41.996601 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:22:41.996601 ignition[1113]: INFO : mount: mount passed May 17 00:22:41.996601 ignition[1113]: INFO : POST message to Packet Timeline May 17 00:22:41.996601 ignition[1113]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 00:22:41.991710 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:22:42.709970 coreos-metadata[1011]: May 17 00:22:42.709 INFO Fetch successful May 17 00:22:42.778106 coreos-metadata[995]: May 17 00:22:42.778 INFO Fetch successful May 17 00:22:42.790585 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 17 00:22:42.790646 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. May 17 00:22:42.811673 coreos-metadata[995]: May 17 00:22:42.809 INFO wrote hostname ci-4081.3.3-n-750554c5a6 to /sysroot/etc/hostname May 17 00:22:42.810964 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:22:42.936739 ignition[1113]: INFO : GET result: OK May 17 00:22:43.268165 ignition[1113]: INFO : Ignition finished successfully May 17 00:22:43.270564 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:22:43.305807 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:22:43.325040 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 17 00:22:43.390656 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1139) May 17 00:22:43.390685 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:22:43.411201 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:22:43.429462 kernel: BTRFS info (device sda6): using free space tree May 17 00:22:43.469432 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:22:43.469456 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:22:43.483596 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:22:43.509966 ignition[1156]: INFO : Ignition 2.19.0 May 17 00:22:43.509966 ignition[1156]: INFO : Stage: files May 17 00:22:43.524789 ignition[1156]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:22:43.524789 ignition[1156]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:22:43.524789 ignition[1156]: DEBUG : files: compiled without relabeling support, skipping May 17 00:22:43.524789 ignition[1156]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:22:43.524789 ignition[1156]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:22:43.524789 ignition[1156]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:22:43.524789 ignition[1156]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:22:43.524789 ignition[1156]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:22:43.524789 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:22:43.524789 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:22:43.524789 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:22:43.524789 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:22:43.513924 unknown[1156]: wrote ssh authorized keys file for user: core May 17 00:22:43.689824 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:22:44.018409 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:22:44.035796 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:22:44.815761 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 00:22:45.081382 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:22:45.081382 ignition[1156]: INFO : files: op(c): [started] processing unit "containerd.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(c): [finished] processing unit "containerd.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:22:45.111725 ignition[1156]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:22:45.111725 ignition[1156]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:22:45.111725 ignition[1156]: INFO : files: files 
passed May 17 00:22:45.111725 ignition[1156]: INFO : POST message to Packet Timeline May 17 00:22:45.111725 ignition[1156]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 00:22:46.127723 ignition[1156]: INFO : GET result: OK May 17 00:22:46.533332 ignition[1156]: INFO : Ignition finished successfully May 17 00:22:46.536204 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:22:46.567738 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:22:46.568141 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:22:46.587139 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:22:46.587205 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:22:46.622735 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:22:46.640890 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:22:46.693864 initrd-setup-root-after-ignition[1196]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:22:46.693864 initrd-setup-root-after-ignition[1196]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:22:46.670910 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:22:46.723212 initrd-setup-root-after-ignition[1201]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:22:46.774726 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:22:46.774776 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:22:46.793781 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:22:46.816764 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:22:46.836871 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:22:46.854717 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:22:46.878845 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:22:46.910039 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:22:46.939334 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:22:46.951176 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:22:46.972366 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:22:46.982497 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:22:46.982925 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:22:47.029986 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:22:47.040232 systemd[1]: Stopped target basic.target - Basic System. May 17 00:22:47.060233 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:22:47.078222 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:22:47.099226 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:22:47.109526 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:22:47.138236 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
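The numbered file ops in the files stage above are driven entirely by declarative Ignition config rather than imperative scripting. A hedged reconstruction of the kind of spec-3.x fragment that would produce op(4) (the Helm download) and op(d) (the containerd drop-in): the URLs and names match the log, but the spec version and the drop-in contents are illustrative assumptions:

    {
      "ignition": { "version": "3.3.0" },
      "storage": {
        "files": [{
          "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
          "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" }
        }]
      },
      "systemd": {
        "units": [{
          "name": "containerd.service",
          "dropins": [{
            "name": "10-use-cgroupfs.conf",
            "contents": "[Service]\nEnvironment=CONTAINERD_CONFIG=/usr/share/containerd/config-cgroupfs.toml\n"
          }]
        }]
      }
    }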
May 17 00:22:47.148429 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:22:47.170416 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:22:47.187392 systemd[1]: Stopped target swap.target - Swaps. May 17 00:22:47.214057 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:22:47.214456 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:22:47.250008 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:22:47.260245 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:22:47.281076 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:22:47.281526 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:22:47.292280 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:22:47.292702 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:22:47.333215 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:22:47.333689 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:22:47.353388 systemd[1]: Stopped target paths.target - Path Units. May 17 00:22:47.363256 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:22:47.363686 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:22:47.380403 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:22:47.401371 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:22:47.417338 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:22:47.417678 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:22:47.445254 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:22:47.445588 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:22:47.458464 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:22:47.458886 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:22:47.474457 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:22:47.474868 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:22:47.613748 ignition[1221]: INFO : Ignition 2.19.0 May 17 00:22:47.613748 ignition[1221]: INFO : Stage: umount May 17 00:22:47.613748 ignition[1221]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:22:47.613748 ignition[1221]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 00:22:47.613748 ignition[1221]: INFO : umount: umount passed May 17 00:22:47.613748 ignition[1221]: INFO : POST message to Packet Timeline May 17 00:22:47.613748 ignition[1221]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 00:22:47.503299 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:22:47.503725 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:22:47.532801 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:22:47.566401 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:22:47.577922 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:22:47.578355 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 17 00:22:47.605704 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:22:47.605784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:22:47.641910 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:22:47.642450 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:22:47.642528 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:22:47.651366 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:22:47.651467 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:22:48.585995 ignition[1221]: INFO : GET result: OK May 17 00:22:48.999894 ignition[1221]: INFO : Ignition finished successfully May 17 00:22:49.001474 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:22:49.001644 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:22:49.019872 systemd[1]: Stopped target network.target - Network. May 17 00:22:49.034751 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:22:49.034942 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:22:49.053889 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:22:49.054049 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:22:49.072914 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:22:49.073070 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:22:49.092920 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:22:49.093083 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:22:49.112901 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:22:49.113069 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:22:49.132382 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:22:49.142626 systemd-networkd[935]: enp1s0f0np0: DHCPv6 lease lost May 17 00:22:49.150739 systemd-networkd[935]: enp1s0f1np1: DHCPv6 lease lost May 17 00:22:49.150986 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:22:49.170590 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:22:49.170865 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:22:49.191158 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:22:49.191603 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:22:49.211117 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:22:49.211241 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:22:49.240716 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:22:49.257658 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:22:49.257693 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:22:49.278790 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:22:49.278869 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:22:49.297848 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:22:49.297974 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:22:49.315899 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
May 17 00:22:49.316067 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:22:49.338251 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:22:49.358899 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:22:49.359275 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:22:49.390615 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:22:49.390759 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:22:49.397055 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:22:49.397154 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:22:49.423787 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:22:49.424016 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:22:49.454097 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:22:49.454379 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:22:49.798774 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). May 17 00:22:49.798800 systemd-journald[267]: Failed to send stream file descriptor to service manager: Connection refused May 17 00:22:49.484098 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:22:49.484261 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:22:49.529628 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:22:49.555699 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:22:49.555759 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:22:49.576610 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:22:49.576644 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:22:49.598649 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:22:49.598707 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:22:49.617710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:22:49.617810 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:49.640815 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:22:49.641076 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:22:49.660362 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:22:49.660619 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:22:49.682686 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:22:49.714942 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:22:49.733881 systemd[1]: Switching root. 
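"Switching root" is the initrd handing control to the installed system: initrd-switch-root.service re-executes systemd inside /sysroot, which is why journald receives SIGTERM here and logs "Journal stopped" immediately below before a fresh instance starts in the new root. The administrative equivalent of what systemd runs at this point is:

    systemctl switch-root /sysroot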
May 17 00:22:49.906168 systemd-journald[267]: Journal stopped May 17 00:22:52.373119 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:22:52.373134 kernel: SELinux: policy capability open_perms=1 May 17 00:22:52.373141 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:22:52.373147 kernel: SELinux: policy capability always_check_network=0 May 17 00:22:52.373152 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:22:52.373157 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:22:52.373163 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:22:52.373169 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:22:52.373174 kernel: audit: type=1403 audit(1747441370.156:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:22:52.373181 systemd[1]: Successfully loaded SELinux policy in 172.377ms. May 17 00:22:52.373188 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.099ms. May 17 00:22:52.373195 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:22:52.373201 systemd[1]: Detected architecture x86-64. May 17 00:22:52.373207 systemd[1]: Detected first boot. May 17 00:22:52.373214 systemd[1]: Hostname set to <ci-4081.3.3-n-750554c5a6>. May 17 00:22:52.373221 systemd[1]: Initializing machine ID from random generator. May 17 00:22:52.373228 zram_generator::config[1293]: No configuration found. May 17 00:22:52.373234 systemd[1]: Populated /etc with preset unit settings. May 17 00:22:52.373240 systemd[1]: Queued start job for default target multi-user.target. May 17 00:22:52.373247 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 17 00:22:52.373253 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:22:52.373259 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:22:52.373266 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:22:52.373273 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:22:52.373279 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:22:52.373286 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:22:52.373293 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:22:52.373299 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:22:52.373305 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:22:52.373312 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:22:52.373319 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:22:52.373325 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:22:52.373334 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 17 00:22:52.373340 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
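The "zram_generator::config: No configuration found" line above means no zram swap was requested; the generator ran and did nothing. Opting in would take a zram-generator.conf along these lines (the size expression follows the generator's documented syntax; the concrete values are examples only):

    # /etc/systemd/zram-generator.conf
    [zram0]
    zram-size = min(ram / 2, 4096)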
May 17 00:22:52.373347 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... May 17 00:22:52.373353 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:22:52.373359 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:22:52.373367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:22:52.373373 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:22:52.373380 systemd[1]: Reached target slices.target - Slice Units. May 17 00:22:52.373388 systemd[1]: Reached target swap.target - Swaps. May 17 00:22:52.373395 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:22:52.373401 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:22:52.373408 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:22:52.373415 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:22:52.373422 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:22:52.373428 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:22:52.373435 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:22:52.373441 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:22:52.373448 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:22:52.373456 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:22:52.373463 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:22:52.373469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:52.373476 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:22:52.373483 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:22:52.373489 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:22:52.373496 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:22:52.373506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:22:52.373514 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:22:52.373520 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:22:52.373527 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:22:52.373533 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:22:52.373540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:22:52.373547 kernel: ACPI: bus type drm_connector registered May 17 00:22:52.373553 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 17 00:22:52.373560 kernel: fuse: init (API version 7.39) May 17 00:22:52.373567 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:22:52.373574 kernel: loop: module loaded May 17 00:22:52.373580 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
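The modprobe@dm_mod, modprobe@drm, modprobe@fuse and modprobe@loop units above are instances of systemd's modprobe@.service template, whose instance name is the module to load (note the matching "fuse: init" and "loop: module loaded" kernel lines). Starting one by hand shows the mechanism:

    systemctl start modprobe@loop.service   # effectively: modprobe -abq loop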
May 17 00:22:52.373587 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 17 00:22:52.373594 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 17 00:22:52.373600 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:22:52.373615 systemd-journald[1416]: Collecting audit messages is disabled. May 17 00:22:52.373631 systemd-journald[1416]: Journal started May 17 00:22:52.373646 systemd-journald[1416]: Runtime Journal (/run/log/journal/239745f5d8d34c0b8914ae42ede03ce1) is 8.0M, max 639.9M, 631.9M free. May 17 00:22:52.402539 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:22:52.436553 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:22:52.470552 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:22:52.503550 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:22:52.554539 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:52.575719 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:22:52.585339 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:22:52.595760 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:22:52.605755 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:22:52.615753 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:22:52.625725 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:22:52.635741 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:22:52.645915 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:22:52.656975 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:22:52.668050 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:22:52.668296 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:22:52.680341 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:22:52.680836 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:22:52.692377 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:22:52.692798 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:22:52.703352 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:22:52.703837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:22:52.716387 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:22:52.716800 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:22:52.728361 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:22:52.728778 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:22:52.739906 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:22:52.750920 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
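[Aside: the handoff above, journald[267] in the initrd stopping and journald[1416] opening the runtime journal under /run/log/journal, leaves all of these records queryable after the fact. A minimal sketch for pulling them back out programmatically, assuming the python-systemd bindings are installed (they ship separately from systemd itself):

    from systemd import journal  # python-systemd bindings (assumed installed)

    j = journal.Reader()
    j.this_boot()  # same scope as `journalctl -b`
    j.add_match(SYSLOG_IDENTIFIER="systemd-journald")
    for entry in j:
        print(entry["__REALTIME_TIMESTAMP"], entry["MESSAGE"])
]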
May 17 00:22:52.761886 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:22:52.772906 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:22:52.790758 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:22:52.812734 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:22:52.823452 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:22:52.832670 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:22:52.834535 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:22:52.846865 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:22:52.857634 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:22:52.858264 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:22:52.860527 systemd-journald[1416]: Time spent on flushing to /var/log/journal/239745f5d8d34c0b8914ae42ede03ce1 is 12.931ms for 1362 entries. May 17 00:22:52.860527 systemd-journald[1416]: System Journal (/var/log/journal/239745f5d8d34c0b8914ae42ede03ce1) is 8.0M, max 195.6M, 187.6M free. May 17 00:22:52.897383 systemd-journald[1416]: Received client request to flush runtime journal. May 17 00:22:52.875657 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:22:52.876279 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:22:52.911651 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:22:52.923307 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:22:52.923456 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. May 17 00:22:52.923466 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. May 17 00:22:52.935558 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:22:52.947632 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:22:52.958749 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:22:52.969750 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:22:52.980741 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:22:52.990762 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:22:53.004214 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:22:53.023792 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:22:53.034905 udevadm[1462]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:22:53.041021 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:22:53.065731 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:22:53.073464 systemd-tmpfiles[1474]: ACLs are not supported, ignoring. 
May 17 00:22:53.073473 systemd-tmpfiles[1474]: ACLs are not supported, ignoring. May 17 00:22:53.076857 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:22:53.227281 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:22:53.249778 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:22:53.262479 systemd-udevd[1481]: Using default interface naming scheme 'v255'. May 17 00:22:53.277370 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:22:53.293971 systemd[1]: Found device dev-ttyS1.device - /dev/ttyS1. May 17 00:22:53.307514 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 May 17 00:22:53.307557 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1489) May 17 00:22:53.307570 kernel: ACPI: button: Sleep Button [SLPB] May 17 00:22:53.370513 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 17 00:22:53.370573 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:22:53.376535 kernel: ACPI: button: Power Button [PWRF] May 17 00:22:53.422742 kernel: IPMI message handler: version 39.2 May 17 00:22:53.423765 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:22:53.461514 kernel: ipmi device interface May 17 00:22:53.485548 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. May 17 00:22:53.497089 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set May 17 00:22:53.497314 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt May 17 00:22:53.532522 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) May 17 00:22:53.574882 kernel: ipmi_si: IPMI System Interface driver May 17 00:22:53.574903 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS May 17 00:22:53.595559 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 May 17 00:22:53.614979 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine May 17 00:22:53.634320 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI May 17 00:22:53.655331 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 May 17 00:22:53.677196 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI May 17 00:22:53.696044 kernel: ipmi_si: Adding ACPI-specified kcs state machine May 17 00:22:53.719019 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 May 17 00:22:53.734666 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:22:53.745339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:22:53.750508 kernel: iTCO_vendor_support: vendor-support=0 May 17 00:22:53.750535 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface May 17 00:22:53.750674 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface May 17 00:22:53.767861 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:22:53.775519 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. 
May 17 00:22:53.890948 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) May 17 00:22:53.891089 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) May 17 00:22:53.891176 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) May 17 00:22:53.936598 systemd-networkd[1566]: lo: Link UP May 17 00:22:53.936602 systemd-networkd[1566]: lo: Gained carrier May 17 00:22:53.938934 systemd-networkd[1566]: bond0: netdev ready May 17 00:22:53.939845 systemd-networkd[1566]: Enumeration completed May 17 00:22:53.939997 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:22:53.940580 systemd-networkd[1566]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:fc:94.network. May 17 00:22:53.958616 kernel: intel_rapl_common: Found RAPL domain package May 17 00:22:53.958640 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized May 17 00:22:53.958736 kernel: intel_rapl_common: Found RAPL domain core May 17 00:22:53.990508 kernel: intel_rapl_common: Found RAPL domain dram May 17 00:22:54.026511 kernel: ipmi_ssif: IPMI SSIF Interface driver May 17 00:22:54.032634 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:22:54.043787 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:22:54.298543 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 17 00:22:54.324610 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link May 17 00:22:54.325767 systemd-networkd[1566]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:fc:95.network. May 17 00:22:54.512583 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 00:22:54.535159 systemd-networkd[1566]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 17 00:22:54.535546 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link May 17 00:22:54.537667 systemd-networkd[1566]: enp1s0f0np0: Link UP May 17 00:22:54.537885 systemd-networkd[1566]: enp1s0f0np0: Gained carrier May 17 00:22:54.558530 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 17 00:22:54.560440 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:22:54.567146 systemd-networkd[1566]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:fc:94.network. May 17 00:22:54.567333 systemd-networkd[1566]: enp1s0f1np1: Link UP May 17 00:22:54.567498 systemd-networkd[1566]: enp1s0f1np1: Gained carrier May 17 00:22:54.579687 systemd-networkd[1566]: bond0: Link UP May 17 00:22:54.579937 systemd-networkd[1566]: bond0: Gained carrier May 17 00:22:54.588660 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:22:54.596802 lvm[1601]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:22:54.632974 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:22:54.643949 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:22:54.667587 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex May 17 00:22:54.667611 kernel: bond0: active interface up! May 17 00:22:54.688654 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
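[Aside: once the bonding driver has enslaved both mlx5 ports as logged above, the kernel's live view of the bond (mode, MII status, slaves) is readable from /proc/net/bonding/bond0. A small sketch that summarizes it, assuming it runs on a host like this one with the bonding module loaded:

    # Print the bonding driver's own summary of bond0.
    with open("/proc/net/bonding/bond0") as f:
        for line in f:
            if line.startswith(("Bonding Mode", "MII Status", "Slave Interface")):
                print(line.strip())
]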
May 17 00:22:54.690769 lvm[1604]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:22:54.722010 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:22:54.732922 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:22:54.743590 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:22:54.743604 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:22:54.753606 systemd[1]: Reached target machines.target - Containers. May 17 00:22:54.762183 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:22:54.794563 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex May 17 00:22:54.795610 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:22:54.807247 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:22:54.816649 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:22:54.824867 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:22:54.836223 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:22:54.848327 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:22:54.848793 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:22:54.867150 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:22:54.867507 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:22:54.868197 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:22:54.910551 kernel: loop0: detected capacity change from 0 to 8 May 17 00:22:54.932553 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:22:54.985514 kernel: loop1: detected capacity change from 0 to 142488 May 17 00:22:55.078556 kernel: loop2: detected capacity change from 0 to 140768 May 17 00:22:55.151566 kernel: loop3: detected capacity change from 0 to 221472 May 17 00:22:55.168089 ldconfig[1609]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:22:55.169485 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:22:55.218547 kernel: loop4: detected capacity change from 0 to 8 May 17 00:22:55.235532 kernel: loop5: detected capacity change from 0 to 142488 May 17 00:22:55.263568 kernel: loop6: detected capacity change from 0 to 140768 May 17 00:22:55.288568 kernel: loop7: detected capacity change from 0 to 221472 May 17 00:22:55.298092 (sd-merge)[1628]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. May 17 00:22:55.298318 (sd-merge)[1628]: Merged extensions into '/usr'. May 17 00:22:55.300345 systemd[1]: Reloading requested from client PID 1613 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:22:55.300351 systemd[1]: Reloading... May 17 00:22:55.335513 zram_generator::config[1655]: No configuration found. 
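[Aside: the (sd-merge) lines above are systemd-sysext stacking the four extension images over the base /usr; the merge is a read-only overlayfs mount with /usr as the bottom layer. A conceptual sketch of the equivalent mount invocation, with hypothetical staging paths, since systemd-sysext locates and mounts the real images itself:

    extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-packet"]
    # Hypothetical paths for illustration only.
    lowerdirs = [f"/run/extensions/{e}/usr" for e in extensions] + ["/usr"]
    # In overlayfs the first lowerdir is the topmost layer, so extensions win over /usr.
    print("mount -t overlay overlay -o ro,lowerdir=" + ":".join(lowerdirs) + " /usr")
]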
May 17 00:22:55.398710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:22:55.450251 systemd[1]: Reloading finished in 149 ms. May 17 00:22:55.463645 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:22:55.493565 systemd[1]: Starting ensure-sysext.service... May 17 00:22:55.501236 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:22:55.514757 systemd[1]: Reloading requested from client PID 1716 ('systemctl') (unit ensure-sysext.service)... May 17 00:22:55.514764 systemd[1]: Reloading... May 17 00:22:55.521322 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:22:55.521574 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:22:55.522221 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:22:55.522470 systemd-tmpfiles[1717]: ACLs are not supported, ignoring. May 17 00:22:55.522528 systemd-tmpfiles[1717]: ACLs are not supported, ignoring. May 17 00:22:55.524412 systemd-tmpfiles[1717]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:22:55.524416 systemd-tmpfiles[1717]: Skipping /boot May 17 00:22:55.528454 systemd-tmpfiles[1717]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:22:55.528458 systemd-tmpfiles[1717]: Skipping /boot May 17 00:22:55.551567 zram_generator::config[1746]: No configuration found. May 17 00:22:55.612973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:22:55.666384 systemd[1]: Reloading finished in 151 ms. May 17 00:22:55.680295 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:22:55.704580 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:22:55.714453 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:22:55.727275 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:22:55.732285 augenrules[1828]: No rules May 17 00:22:55.739606 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:22:55.750260 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:22:55.761973 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:22:55.771839 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:22:55.782795 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:22:55.809693 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:55.809818 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:22:55.810577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 17 00:22:55.821339 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:22:55.845719 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:22:55.846353 systemd-resolved[1834]: Positive Trust Anchors: May 17 00:22:55.846363 systemd-resolved[1834]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:22:55.846400 systemd-resolved[1834]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:22:55.849520 systemd-resolved[1834]: Using system hostname 'ci-4081.3.3-n-750554c5a6'. May 17 00:22:55.855646 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:22:55.856477 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:22:55.865600 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:22:55.865660 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:55.877886 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:22:55.887963 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:22:55.898745 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:22:55.898828 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:22:55.909790 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:22:55.909870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:22:55.921715 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:22:55.922189 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:22:55.931807 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:22:55.944074 systemd[1]: Reached target network.target - Network. May 17 00:22:55.952656 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:22:55.963731 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:55.963856 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:22:55.974670 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:22:55.985207 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:22:55.997290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:22:56.006648 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
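[Aside: the positive trust anchor systemd-resolved loads above is the root zone's KSK-2017 DS record (key tag 20326). A quick way to inspect the DNSKEY set it pins, assuming the third-party dnspython package:

    import dns.dnssec    # dnspython (assumed installed)
    import dns.resolver

    # Fetch the root zone's DNSKEY RRset and show each key's tag;
    # one KSK should report tag 20326, matching the DS record above.
    for key in dns.resolver.resolve(".", "DNSKEY"):
        print(dns.dnssec.key_id(key), key.flags)
]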
May 17 00:22:56.006736 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:22:56.006794 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:56.007489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:22:56.007588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:22:56.018844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:22:56.018919 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:22:56.026621 systemd-networkd[1566]: bond0: Gained IPv6LL May 17 00:22:56.029834 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:22:56.029911 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:22:56.041918 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:56.042074 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:22:56.052742 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:22:56.063179 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:22:56.074179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:22:56.087356 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:22:56.096740 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:22:56.096828 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:22:56.096887 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:22:56.097583 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:22:56.097676 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:22:56.108922 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:22:56.109025 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:22:56.118919 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:22:56.119040 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:22:56.130015 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:22:56.130169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:22:56.142264 systemd[1]: Finished ensure-sysext.service. May 17 00:22:56.157205 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:22:56.157236 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:22:56.165784 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
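[Aside: systemd-timesyncd, starting above, is an SNTP client rather than a full NTP implementation. The wire exchange is small enough to sketch directly; this stand-alone version sends one client packet and reads the server's transmit timestamp (the pool name is the one this host contacts later in the log):

    import socket
    import struct
    import time

    NTP_DELTA = 2208988800  # seconds from the NTP epoch (1900) to the Unix epoch (1970)

    def sntp_query(server="0.flatcar.pool.ntp.org"):
        pkt = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client request)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(5)
            s.sendto(pkt, (server, 123))
            data, _ = s.recvfrom(48)
        secs = struct.unpack("!I", data[40:44])[0]  # Transmit Timestamp, seconds field
        return secs - NTP_DELTA

    print(time.ctime(sntp_query()))
]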
May 17 00:22:56.214484 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:22:56.226842 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:22:56.237884 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:22:56.247587 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:22:56.257626 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:22:56.268610 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:22:56.279579 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:22:56.290581 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:22:56.290597 systemd[1]: Reached target paths.target - Path Units. May 17 00:22:56.298571 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:22:56.308656 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:22:56.318616 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:22:56.329571 systemd[1]: Reached target timers.target - Timer Units. May 17 00:22:56.337845 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:22:56.348280 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:22:56.357071 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:22:56.366836 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:22:56.376602 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:22:56.385578 systemd[1]: Reached target basic.target - Basic System. May 17 00:22:56.393659 systemd[1]: System is tainted: cgroupsv1 May 17 00:22:56.393680 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:22:56.393693 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:22:56.402575 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:22:56.413324 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:22:56.423141 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:22:56.432184 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:22:56.435866 coreos-metadata[1891]: May 17 00:22:56.435 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 00:22:56.441930 dbus-daemon[1892]: [system] SELinux support is enabled May 17 00:22:56.442268 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:22:56.444078 jq[1895]: false May 17 00:22:56.452577 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:22:56.453472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
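[Aside: the coreos-metadata fetch above hits the Equinix Metal (Packet) metadata endpoint, which is only reachable from the instance itself. A sketch of the same request; the JSON field name is an assumption for illustration:

    import json
    import urllib.request

    # Instance metadata endpoint named in the log; host-local access only.
    with urllib.request.urlopen("https://metadata.packet.net/metadata", timeout=10) as r:
        md = json.load(r)
    print(md.get("hostname"))  # assumed field name
]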
May 17 00:22:56.460712 extend-filesystems[1897]: Found loop4 May 17 00:22:56.460712 extend-filesystems[1897]: Found loop5 May 17 00:22:56.508786 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks May 17 00:22:56.508860 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1912) May 17 00:22:56.463561 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:22:56.508993 extend-filesystems[1897]: Found loop6 May 17 00:22:56.508993 extend-filesystems[1897]: Found loop7 May 17 00:22:56.508993 extend-filesystems[1897]: Found sda May 17 00:22:56.508993 extend-filesystems[1897]: Found sda1 May 17 00:22:56.508993 extend-filesystems[1897]: Found sda2 May 17 00:22:56.508993 extend-filesystems[1897]: Found sda3 May 17 00:22:56.508993 extend-filesystems[1897]: Found usr May 17 00:22:56.508993 extend-filesystems[1897]: Found sda4 May 17 00:22:56.508993 extend-filesystems[1897]: Found sda6 May 17 00:22:56.508993 extend-filesystems[1897]: Found sda7 May 17 00:22:56.508993 extend-filesystems[1897]: Found sda9 May 17 00:22:56.508993 extend-filesystems[1897]: Checking size of /dev/sda9 May 17 00:22:56.508993 extend-filesystems[1897]: Resized partition /dev/sda9 May 17 00:22:56.638762 extend-filesystems[1907]: resize2fs 1.47.1 (20-May-2024) May 17 00:22:56.509629 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:22:56.524252 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:22:56.539292 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:22:56.578610 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:22:56.616681 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:22:56.643573 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... May 17 00:22:56.667954 systemd-logind[1937]: Watching system buttons on /dev/input/event3 (Power Button) May 17 00:22:56.667965 systemd-logind[1937]: Watching system buttons on /dev/input/event2 (Sleep Button) May 17 00:22:56.667974 systemd-logind[1937]: Watching system buttons on /dev/input/event0 (HID 0557:2419) May 17 00:22:56.668282 systemd-logind[1937]: New seat seat0. May 17 00:22:56.683604 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:22:56.691551 update_engine[1942]: I20250517 00:22:56.691474 1942 main.cc:92] Flatcar Update Engine starting May 17 00:22:56.692244 update_engine[1942]: I20250517 00:22:56.692201 1942 update_check_scheduler.cc:74] Next update check in 10m53s May 17 00:22:56.692235 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:22:56.703821 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:22:56.705425 jq[1945]: true May 17 00:22:56.715101 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:22:56.725070 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:22:56.725192 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:22:56.725561 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:22:56.725672 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:22:56.735734 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:22:56.746067 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 17 00:22:56.746197 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:22:56.760424 (ntainerd)[1952]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:22:56.761960 jq[1951]: true May 17 00:22:56.764083 dbus-daemon[1892]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:22:56.766435 tar[1950]: linux-amd64/helm May 17 00:22:56.772564 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. May 17 00:22:56.772704 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. May 17 00:22:56.773108 systemd[1]: Started update-engine.service - Update Engine. May 17 00:22:56.783234 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:22:56.783336 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:22:56.794644 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:22:56.794721 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:22:56.806104 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:22:56.825710 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:22:56.845354 sshd_keygen[1940]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:22:56.853329 bash[1982]: Updated "/home/core/.ssh/authorized_keys" May 17 00:22:56.853811 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:22:56.855692 locksmithd[1985]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:22:56.865995 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:22:56.889688 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:22:56.898563 systemd[1]: Starting sshkeys.service... May 17 00:22:56.905893 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:22:56.906019 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:22:56.918082 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:22:56.929345 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:22:56.941384 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:22:56.952966 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:22:56.959717 containerd[1952]: time="2025-05-17T00:22:56.959635047Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:22:56.963696 coreos-metadata[2018]: May 17 00:22:56.963 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 00:22:56.972917 containerd[1952]: time="2025-05-17T00:22:56.972875347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:22:56.973691 containerd[1952]: time="2025-05-17T00:22:56.973645853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:22:56.973691 containerd[1952]: time="2025-05-17T00:22:56.973663241Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:22:56.973691 containerd[1952]: time="2025-05-17T00:22:56.973673114Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:22:56.973765 containerd[1952]: time="2025-05-17T00:22:56.973754397Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:22:56.973785 containerd[1952]: time="2025-05-17T00:22:56.973765740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:22:56.973851 containerd[1952]: time="2025-05-17T00:22:56.973799694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:22:56.973851 containerd[1952]: time="2025-05-17T00:22:56.973808063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:22:56.973963 containerd[1952]: time="2025-05-17T00:22:56.973926062Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:22:56.973963 containerd[1952]: time="2025-05-17T00:22:56.973935359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:22:56.973963 containerd[1952]: time="2025-05-17T00:22:56.973944312Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:22:56.973963 containerd[1952]: time="2025-05-17T00:22:56.973949973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:22:56.974030 containerd[1952]: time="2025-05-17T00:22:56.973990001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:22:56.974238 containerd[1952]: time="2025-05-17T00:22:56.974202880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:22:56.974284 containerd[1952]: time="2025-05-17T00:22:56.974276405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:22:56.974304 containerd[1952]: time="2025-05-17T00:22:56.974284869Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 17 00:22:56.974330 containerd[1952]: time="2025-05-17T00:22:56.974323894Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:22:56.974357 containerd[1952]: time="2025-05-17T00:22:56.974350928Z" level=info msg="metadata content store policy set" policy=shared May 17 00:22:56.984781 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:22:56.997406 containerd[1952]: time="2025-05-17T00:22:56.997359382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:22:56.997447 containerd[1952]: time="2025-05-17T00:22:56.997394345Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:22:56.997447 containerd[1952]: time="2025-05-17T00:22:56.997427564Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:22:56.997447 containerd[1952]: time="2025-05-17T00:22:56.997444619Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:22:56.997496 containerd[1952]: time="2025-05-17T00:22:56.997454372Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:22:56.997596 containerd[1952]: time="2025-05-17T00:22:56.997549546Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:22:56.997803 containerd[1952]: time="2025-05-17T00:22:56.997760167Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:22:56.997878 containerd[1952]: time="2025-05-17T00:22:56.997836706Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:22:56.997878 containerd[1952]: time="2025-05-17T00:22:56.997848425Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:22:56.997878 containerd[1952]: time="2025-05-17T00:22:56.997856185Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:22:56.997878 containerd[1952]: time="2025-05-17T00:22:56.997864976Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:22:56.997878 containerd[1952]: time="2025-05-17T00:22:56.997874941Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:22:56.997964 containerd[1952]: time="2025-05-17T00:22:56.997883360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:22:56.997964 containerd[1952]: time="2025-05-17T00:22:56.997891401Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:22:56.997964 containerd[1952]: time="2025-05-17T00:22:56.997900104Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:22:56.997964 containerd[1952]: time="2025-05-17T00:22:56.997907924Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 May 17 00:22:56.997964 containerd[1952]: time="2025-05-17T00:22:56.997914698Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:22:56.997964 containerd[1952]: time="2025-05-17T00:22:56.997921498Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:22:56.997964 containerd[1952]: time="2025-05-17T00:22:56.997933465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:22:56.997964 containerd[1952]: time="2025-05-17T00:22:56.997941208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:22:56.997964 containerd[1952]: time="2025-05-17T00:22:56.997949878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:22:56.997964 containerd[1952]: time="2025-05-17T00:22:56.997958329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.997965914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.997973528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.997980229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.997987104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.997994194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.998002641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.998009468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.998015782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.998024780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.998033465Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.998045560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.998052320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.998058302Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 May 17 00:22:56.998093 containerd[1952]: time="2025-05-17T00:22:56.998082971Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:22:56.998272 containerd[1952]: time="2025-05-17T00:22:56.998094938Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:22:56.998272 containerd[1952]: time="2025-05-17T00:22:56.998102950Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:22:56.998272 containerd[1952]: time="2025-05-17T00:22:56.998110048Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:22:56.998272 containerd[1952]: time="2025-05-17T00:22:56.998116747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998272 containerd[1952]: time="2025-05-17T00:22:56.998126692Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:22:56.998272 containerd[1952]: time="2025-05-17T00:22:56.998135755Z" level=info msg="NRI interface is disabled by configuration." May 17 00:22:56.998272 containerd[1952]: time="2025-05-17T00:22:56.998141624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:22:56.998370 containerd[1952]: time="2025-05-17T00:22:56.998314896Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:22:56.998370 containerd[1952]: time="2025-05-17T00:22:56.998350236Z" level=info msg="Connect containerd service" May 17 00:22:56.998370 containerd[1952]: time="2025-05-17T00:22:56.998368807Z" level=info msg="using legacy CRI server" May 17 00:22:56.998475 containerd[1952]: time="2025-05-17T00:22:56.998373671Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:22:56.998475 containerd[1952]: time="2025-05-17T00:22:56.998428287Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:22:56.999145 containerd[1952]: time="2025-05-17T00:22:56.999127927Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:22:56.999274 containerd[1952]: time="2025-05-17T00:22:56.999254182Z" level=info msg="Start subscribing containerd event" May 17 00:22:56.999296 containerd[1952]: time="2025-05-17T00:22:56.999285315Z" level=info msg="Start recovering state" May 17 00:22:56.999344 containerd[1952]: time="2025-05-17T00:22:56.999334466Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:22:56.999371 containerd[1952]: time="2025-05-17T00:22:56.999363438Z" level=info msg="Start event monitor" May 17 00:22:56.999389 containerd[1952]: time="2025-05-17T00:22:56.999372489Z" level=info msg="Start snapshots syncer" May 17 00:22:56.999389 containerd[1952]: time="2025-05-17T00:22:56.999375881Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:22:56.999422 containerd[1952]: time="2025-05-17T00:22:56.999378142Z" level=info msg="Start cni network conf syncer for default" May 17 00:22:56.999447 containerd[1952]: time="2025-05-17T00:22:56.999424686Z" level=info msg="Start streaming server" May 17 00:22:56.999475 containerd[1952]: time="2025-05-17T00:22:56.999467615Z" level=info msg="containerd successfully booted in 0.040279s" May 17 00:22:57.004780 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. May 17 00:22:57.014751 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:22:57.022864 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:22:57.071125 tar[1950]: linux-amd64/LICENSE May 17 00:22:57.071198 tar[1950]: linux-amd64/README.md May 17 00:22:57.078552 kernel: EXT4-fs (sda9): resized filesystem to 116605649 May 17 00:22:57.100071 extend-filesystems[1907]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 00:22:57.100071 extend-filesystems[1907]: old_desc_blocks = 1, new_desc_blocks = 56 May 17 00:22:57.100071 extend-filesystems[1907]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. 
May 17 00:22:57.140595 extend-filesystems[1897]: Resized filesystem in /dev/sda9 May 17 00:22:57.140595 extend-filesystems[1897]: Found sdb May 17 00:22:57.100785 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:22:57.100919 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:22:57.153940 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:22:57.533494 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 May 17 00:22:57.533647 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity May 17 00:22:57.637680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:22:57.650047 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:22:58.143304 kubelet[2059]: E0517 00:22:58.143272 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:22:58.144296 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:22:58.144380 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:22:58.272587 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:22:58.289826 systemd[1]: Started sshd@0-147.75.202.203:22-147.75.109.163:55998.service - OpenSSH per-connection server daemon (147.75.109.163:55998). May 17 00:22:58.334426 sshd[2078]: Accepted publickey for core from 147.75.109.163 port 55998 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:22:58.335450 sshd[2078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:58.341335 systemd-logind[1937]: New session 1 of user core. May 17 00:22:58.342336 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:22:58.367959 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:22:58.380934 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:22:58.402900 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:22:58.414173 (systemd)[2084]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:22:58.490282 systemd[2084]: Queued start job for default target default.target. May 17 00:22:58.490451 systemd[2084]: Created slice app.slice - User Application Slice. May 17 00:22:58.490463 systemd[2084]: Reached target paths.target - Paths. May 17 00:22:58.490471 systemd[2084]: Reached target timers.target - Timers. May 17 00:22:58.494711 systemd-timesyncd[1883]: Contacted time server 23.186.168.123:123 (0.flatcar.pool.ntp.org). May 17 00:22:58.494735 systemd-timesyncd[1883]: Initial clock synchronization to Sat 2025-05-17 00:22:58.619100 UTC. May 17 00:22:58.510720 systemd[2084]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:22:58.514011 systemd[2084]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:22:58.514039 systemd[2084]: Reached target sockets.target - Sockets. May 17 00:22:58.514048 systemd[2084]: Reached target basic.target - Basic System. May 17 00:22:58.514069 systemd[2084]: Reached target default.target - Main User Target. 
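[Aside: the resize2fs figures reported above are easy to sanity-check: at 4 KiB per block, the root filesystem grew from roughly 2 GiB to essentially the whole 480 GB Micron drive identified earlier.

    # Sanity-check the on-line resize of /dev/sda9 ("(4k) blocks" = 4096 bytes).
    block = 4096
    before, after = 553_472, 116_605_649
    print(f"{before * block / 2**30:.2f} GiB")  # ~2.11 GiB seed filesystem
    print(f"{after * block / 2**30:.2f} GiB")   # ~444.85 GiB, i.e. ~477.6 GB of the 480 GB disk
]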
May 17 00:22:58.514083 systemd[2084]: Startup finished in 94ms. May 17 00:22:58.514244 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:22:58.524521 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:22:58.599740 systemd[1]: Started sshd@1-147.75.202.203:22-147.75.109.163:35156.service - OpenSSH per-connection server daemon (147.75.109.163:35156). May 17 00:22:58.625709 sshd[2097]: Accepted publickey for core from 147.75.109.163 port 35156 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:22:58.626340 sshd[2097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:58.628888 systemd-logind[1937]: New session 2 of user core. May 17 00:22:58.640704 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:22:58.700418 sshd[2097]: pam_unix(sshd:session): session closed for user core May 17 00:22:58.720279 systemd[1]: Started sshd@2-147.75.202.203:22-147.75.109.163:35168.service - OpenSSH per-connection server daemon (147.75.109.163:35168). May 17 00:22:58.734479 systemd[1]: sshd@1-147.75.202.203:22-147.75.109.163:35156.service: Deactivated successfully. May 17 00:22:58.738007 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:22:58.738761 systemd-logind[1937]: Session 2 logged out. Waiting for processes to exit. May 17 00:22:58.739513 systemd-logind[1937]: Removed session 2. May 17 00:22:58.760934 sshd[2103]: Accepted publickey for core from 147.75.109.163 port 35168 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:22:58.761548 sshd[2103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:58.763989 systemd-logind[1937]: New session 3 of user core. May 17 00:22:58.772827 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:22:58.844313 sshd[2103]: pam_unix(sshd:session): session closed for user core May 17 00:22:58.850565 systemd[1]: sshd@2-147.75.202.203:22-147.75.109.163:35168.service: Deactivated successfully. May 17 00:22:58.856551 systemd-logind[1937]: Session 3 logged out. Waiting for processes to exit. May 17 00:22:58.857143 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:22:58.859984 systemd-logind[1937]: Removed session 3. May 17 00:22:59.272133 coreos-metadata[2018]: May 17 00:22:59.272 INFO Fetch successful May 17 00:22:59.311163 coreos-metadata[1891]: May 17 00:22:59.311 INFO Fetch successful May 17 00:22:59.313707 unknown[2018]: wrote ssh authorized keys file for user: core May 17 00:22:59.334530 update-ssh-keys[2115]: Updated "/home/core/.ssh/authorized_keys" May 17 00:22:59.334892 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:22:59.360247 systemd[1]: Finished sshkeys.service. May 17 00:22:59.371614 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:22:59.382921 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... May 17 00:22:59.801288 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. May 17 00:22:59.814054 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:22:59.824843 systemd[1]: Startup finished in 26.803s (kernel) + 9.840s (userspace) = 36.643s. 
May 17 00:22:59.872059 login[2030]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:22:59.872659 login[2033]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:22:59.875141 systemd-logind[1937]: New session 5 of user core. May 17 00:22:59.875694 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:22:59.877106 systemd-logind[1937]: New session 4 of user core. May 17 00:22:59.877414 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:23:08.215426 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:23:08.229755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:08.538047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:08.540233 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:23:08.561893 kubelet[2173]: E0517 00:23:08.561840 2173 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:23:08.564109 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:23:08.564210 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:23:08.955974 systemd[1]: Started sshd@3-147.75.202.203:22-147.75.109.163:43756.service - OpenSSH per-connection server daemon (147.75.109.163:43756). May 17 00:23:08.983736 sshd[2190]: Accepted publickey for core from 147.75.109.163 port 43756 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:23:08.984467 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:08.987450 systemd-logind[1937]: New session 6 of user core. May 17 00:23:08.988300 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:23:09.043258 sshd[2190]: pam_unix(sshd:session): session closed for user core May 17 00:23:09.052818 systemd[1]: Started sshd@4-147.75.202.203:22-147.75.109.163:43758.service - OpenSSH per-connection server daemon (147.75.109.163:43758). May 17 00:23:09.053181 systemd[1]: sshd@3-147.75.202.203:22-147.75.109.163:43756.service: Deactivated successfully. May 17 00:23:09.054063 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:23:09.054430 systemd-logind[1937]: Session 6 logged out. Waiting for processes to exit. May 17 00:23:09.055356 systemd-logind[1937]: Removed session 6. May 17 00:23:09.081467 sshd[2196]: Accepted publickey for core from 147.75.109.163 port 43758 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:23:09.082633 sshd[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:09.087362 systemd-logind[1937]: New session 7 of user core. May 17 00:23:09.101071 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:23:09.160308 sshd[2196]: pam_unix(sshd:session): session closed for user core May 17 00:23:09.176212 systemd[1]: Started sshd@5-147.75.202.203:22-147.75.109.163:43768.service - OpenSSH per-connection server daemon (147.75.109.163:43768). 
May 17 00:23:09.177865 systemd[1]: sshd@4-147.75.202.203:22-147.75.109.163:43758.service: Deactivated successfully. May 17 00:23:09.181842 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:23:09.183724 systemd-logind[1937]: Session 7 logged out. Waiting for processes to exit. May 17 00:23:09.187332 systemd-logind[1937]: Removed session 7. May 17 00:23:09.228280 sshd[2204]: Accepted publickey for core from 147.75.109.163 port 43768 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:23:09.228891 sshd[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:09.231336 systemd-logind[1937]: New session 8 of user core. May 17 00:23:09.250811 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:23:09.313097 sshd[2204]: pam_unix(sshd:session): session closed for user core May 17 00:23:09.333241 systemd[1]: Started sshd@6-147.75.202.203:22-147.75.109.163:43778.service - OpenSSH per-connection server daemon (147.75.109.163:43778). May 17 00:23:09.335109 systemd[1]: sshd@5-147.75.202.203:22-147.75.109.163:43768.service: Deactivated successfully. May 17 00:23:09.338889 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:23:09.340780 systemd-logind[1937]: Session 8 logged out. Waiting for processes to exit. May 17 00:23:09.344358 systemd-logind[1937]: Removed session 8. May 17 00:23:09.386729 sshd[2212]: Accepted publickey for core from 147.75.109.163 port 43778 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:23:09.388440 sshd[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:09.394631 systemd-logind[1937]: New session 9 of user core. May 17 00:23:09.412309 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:23:09.483954 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:23:09.484104 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:09.498198 sudo[2218]: pam_unix(sudo:session): session closed for user root May 17 00:23:09.499172 sshd[2212]: pam_unix(sshd:session): session closed for user core May 17 00:23:09.520913 systemd[1]: Started sshd@7-147.75.202.203:22-147.75.109.163:43790.service - OpenSSH per-connection server daemon (147.75.109.163:43790). May 17 00:23:09.521425 systemd[1]: sshd@6-147.75.202.203:22-147.75.109.163:43778.service: Deactivated successfully. May 17 00:23:09.523356 systemd-logind[1937]: Session 9 logged out. Waiting for processes to exit. May 17 00:23:09.523574 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:23:09.524483 systemd-logind[1937]: Removed session 9. May 17 00:23:09.549776 sshd[2220]: Accepted publickey for core from 147.75.109.163 port 43790 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:23:09.550589 sshd[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:09.553518 systemd-logind[1937]: New session 10 of user core. May 17 00:23:09.562777 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 17 00:23:09.625671 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:23:09.626478 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:09.635352 sudo[2228]: pam_unix(sudo:session): session closed for user root May 17 00:23:09.649096 sudo[2227]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:23:09.649962 sudo[2227]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:09.690803 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:23:09.691765 auditctl[2231]: No rules May 17 00:23:09.692314 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:23:09.692483 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:23:09.693602 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:23:09.712485 augenrules[2250]: No rules May 17 00:23:09.713078 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:23:09.713936 sudo[2227]: pam_unix(sudo:session): session closed for user root May 17 00:23:09.715266 sshd[2220]: pam_unix(sshd:session): session closed for user core May 17 00:23:09.717889 systemd[1]: Started sshd@8-147.75.202.203:22-147.75.109.163:43802.service - OpenSSH per-connection server daemon (147.75.109.163:43802). May 17 00:23:09.718393 systemd[1]: sshd@7-147.75.202.203:22-147.75.109.163:43790.service: Deactivated successfully. May 17 00:23:09.720476 systemd-logind[1937]: Session 10 logged out. Waiting for processes to exit. May 17 00:23:09.720744 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:23:09.721598 systemd-logind[1937]: Removed session 10. May 17 00:23:09.757290 sshd[2257]: Accepted publickey for core from 147.75.109.163 port 43802 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:23:09.758620 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:09.763437 systemd-logind[1937]: New session 11 of user core. May 17 00:23:09.779245 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:23:09.849203 sudo[2263]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:23:09.850043 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:10.274211 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:23:10.274743 (dockerd)[2290]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:23:10.537477 dockerd[2290]: time="2025-05-17T00:23:10.537397605Z" level=info msg="Starting up" May 17 00:23:10.719205 dockerd[2290]: time="2025-05-17T00:23:10.719118443Z" level=info msg="Loading containers: start." May 17 00:23:10.798557 kernel: Initializing XFRM netlink socket May 17 00:23:10.851073 systemd-networkd[1566]: docker0: Link UP May 17 00:23:10.865398 dockerd[2290]: time="2025-05-17T00:23:10.865381606Z" level=info msg="Loading containers: done." 
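dockerd has just brought up docker0 and finished loading containers; the entries that follow show it completing initialization and listening on /run/docker.sock. Once that listen line appears, a liveness check over the local socket with the upstream Engine Go client looks roughly like this sketch. Nothing in it is specific to this host: the client defaults to unix:///var/run/docker.sock, which resolves to /run/docker.sock here (the docker.socket path note further down in this log makes the same point).

// Sketch: ping the freshly started Docker daemon over its local socket,
// using the upstream Engine API Go client with its environment defaults.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err) // daemon not up yet, or socket path wrong
	}
	fmt.Printf("daemon alive, API version %s\n", ping.APIVersion)
}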
May 17 00:23:10.874945 dockerd[2290]: time="2025-05-17T00:23:10.874921517Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:23:10.875036 dockerd[2290]: time="2025-05-17T00:23:10.874984932Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:23:10.875068 dockerd[2290]: time="2025-05-17T00:23:10.875055153Z" level=info msg="Daemon has completed initialization" May 17 00:23:10.889752 dockerd[2290]: time="2025-05-17T00:23:10.889718428Z" level=info msg="API listen on /run/docker.sock" May 17 00:23:10.889851 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:23:11.958976 containerd[1952]: time="2025-05-17T00:23:11.958855801Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:23:12.957731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969931975.mount: Deactivated successfully. May 17 00:23:13.684804 containerd[1952]: time="2025-05-17T00:23:13.684750627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:13.685032 containerd[1952]: time="2025-05-17T00:23:13.684935326Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 17 00:23:13.685342 containerd[1952]: time="2025-05-17T00:23:13.685306605Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:13.687217 containerd[1952]: time="2025-05-17T00:23:13.687175837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:13.687686 containerd[1952]: time="2025-05-17T00:23:13.687646301Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 1.728713516s" May 17 00:23:13.687686 containerd[1952]: time="2025-05-17T00:23:13.687662794Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:23:13.687987 containerd[1952]: time="2025-05-17T00:23:13.687974120Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:23:14.679828 containerd[1952]: time="2025-05-17T00:23:14.679804606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:14.680059 containerd[1952]: time="2025-05-17T00:23:14.680031887Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522" May 17 00:23:14.680592 containerd[1952]: time="2025-05-17T00:23:14.680569526Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:14.682620 containerd[1952]: time="2025-05-17T00:23:14.682557973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:14.683731 containerd[1952]: time="2025-05-17T00:23:14.683684141Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 995.692917ms" May 17 00:23:14.683731 containerd[1952]: time="2025-05-17T00:23:14.683701737Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:23:14.683986 containerd[1952]: time="2025-05-17T00:23:14.683943702Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:23:15.561630 containerd[1952]: time="2025-05-17T00:23:15.561605660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:15.561858 containerd[1952]: time="2025-05-17T00:23:15.561814805Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311" May 17 00:23:15.562201 containerd[1952]: time="2025-05-17T00:23:15.562155277Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:15.563775 containerd[1952]: time="2025-05-17T00:23:15.563734799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:15.564456 containerd[1952]: time="2025-05-17T00:23:15.564410150Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 880.450054ms" May 17 00:23:15.564456 containerd[1952]: time="2025-05-17T00:23:15.564431899Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:23:15.564723 containerd[1952]: time="2025-05-17T00:23:15.564685211Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:23:16.338735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467293876.mount: Deactivated successfully. 
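The PullImage sequence running through these entries is kubelet driving containerd over CRI, but the same pulls can be reproduced directly against the containerd socket with its Go client. A sketch, using one of the images from this run and the "k8s.io" namespace that CRI traffic lives in:

// Sketch: pull one of the control-plane images straight through
// containerd's Go client, into the "k8s.io" namespace used by CRI.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.31.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s, %d bytes\n", img.Name(), size)
}

WithPullUnpack mirrors what CRI does here: content is fetched and immediately unpacked into the overlayfs snapshotter named in the config dump at the top of this section.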
May 17 00:23:16.526901 containerd[1952]: time="2025-05-17T00:23:16.526873565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:16.527066 containerd[1952]: time="2025-05-17T00:23:16.527040279Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 17 00:23:16.527507 containerd[1952]: time="2025-05-17T00:23:16.527488699Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:16.528562 containerd[1952]: time="2025-05-17T00:23:16.528517403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:16.528823 containerd[1952]: time="2025-05-17T00:23:16.528780197Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 964.080089ms" May 17 00:23:16.528823 containerd[1952]: time="2025-05-17T00:23:16.528798428Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:23:16.529090 containerd[1952]: time="2025-05-17T00:23:16.529046345Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:23:17.018812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4107540631.mount: Deactivated successfully. 
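For scale: the kube-proxy pull above moved 30,354,642 bytes in 964 ms, roughly 31 MB/s from registry.k8s.io; the other pulls in this run work out between about 16 MB/s (kube-apiserver, 28,075,645 bytes in 1.73 s) and 37 MB/s (etcd further down, 56,909,194 bytes in 1.54 s).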
May 17 00:23:17.550381 containerd[1952]: time="2025-05-17T00:23:17.550330582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:17.550611 containerd[1952]: time="2025-05-17T00:23:17.550512628Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 17 00:23:17.551030 containerd[1952]: time="2025-05-17T00:23:17.550990419Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:17.552883 containerd[1952]: time="2025-05-17T00:23:17.552835101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:17.553388 containerd[1952]: time="2025-05-17T00:23:17.553348077Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.02428587s" May 17 00:23:17.553388 containerd[1952]: time="2025-05-17T00:23:17.553363008Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:23:17.553659 containerd[1952]: time="2025-05-17T00:23:17.553621093Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:23:18.002676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3251820731.mount: Deactivated successfully. 
May 17 00:23:18.003705 containerd[1952]: time="2025-05-17T00:23:18.003689278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:18.003913 containerd[1952]: time="2025-05-17T00:23:18.003892881Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:23:18.004257 containerd[1952]: time="2025-05-17T00:23:18.004242223Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:18.005364 containerd[1952]: time="2025-05-17T00:23:18.005351034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:18.005874 containerd[1952]: time="2025-05-17T00:23:18.005860235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 452.225308ms" May 17 00:23:18.005920 containerd[1952]: time="2025-05-17T00:23:18.005877282Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:23:18.006189 containerd[1952]: time="2025-05-17T00:23:18.006179241Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:23:18.521113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4047319797.mount: Deactivated successfully. May 17 00:23:18.709404 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:23:18.723686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:19.007824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:19.010129 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:23:19.093088 kubelet[2643]: E0517 00:23:19.093061 2643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:23:19.094403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:23:19.094517 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 17 00:23:19.545762 containerd[1952]: time="2025-05-17T00:23:19.545737729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:19.545995 containerd[1952]: time="2025-05-17T00:23:19.545939554Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 17 00:23:19.546374 containerd[1952]: time="2025-05-17T00:23:19.546362952Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:19.548414 containerd[1952]: time="2025-05-17T00:23:19.548401223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:19.548941 containerd[1952]: time="2025-05-17T00:23:19.548925464Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.542731366s" May 17 00:23:19.548941 containerd[1952]: time="2025-05-17T00:23:19.548940201Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:23:21.394382 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:21.402836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:21.420077 systemd[1]: Reloading requested from client PID 2718 ('systemctl') (unit session-11.scope)... May 17 00:23:21.420084 systemd[1]: Reloading... May 17 00:23:21.454615 zram_generator::config[2757]: No configuration found. May 17 00:23:21.525426 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:23:21.585836 systemd[1]: Reloading finished in 165 ms. May 17 00:23:21.639081 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:23:21.639287 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:23:21.639903 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:21.644341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:21.895234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:21.897410 (kubelet)[2834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:23:21.920895 kubelet[2834]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:23:21.920895 kubelet[2834]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 17 00:23:21.920895 kubelet[2834]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:23:21.921120 kubelet[2834]: I0517 00:23:21.920926 2834 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:23:22.340510 kubelet[2834]: I0517 00:23:22.340465 2834 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:23:22.340510 kubelet[2834]: I0517 00:23:22.340478 2834 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:23:22.340710 kubelet[2834]: I0517 00:23:22.340675 2834 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:23:22.380469 kubelet[2834]: E0517 00:23:22.380402 2834 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.202.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.202.203:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:22.382201 kubelet[2834]: I0517 00:23:22.382167 2834 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:23:22.386531 kubelet[2834]: E0517 00:23:22.386495 2834 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:23:22.386582 kubelet[2834]: I0517 00:23:22.386531 2834 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:23:22.396295 kubelet[2834]: I0517 00:23:22.396248 2834 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:23:22.396904 kubelet[2834]: I0517 00:23:22.396868 2834 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:23:22.396979 kubelet[2834]: I0517 00:23:22.396931 2834 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:23:22.397060 kubelet[2834]: I0517 00:23:22.396943 2834 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-750554c5a6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:23:22.397060 kubelet[2834]: I0517 00:23:22.397038 2834 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:23:22.397060 kubelet[2834]: I0517 00:23:22.397044 2834 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:23:22.397158 kubelet[2834]: I0517 00:23:22.397097 2834 state_mem.go:36] "Initialized new in-memory state store" May 17 00:23:22.399010 kubelet[2834]: I0517 00:23:22.398976 2834 kubelet.go:408] "Attempting to sync node with API server" May 17 00:23:22.399010 kubelet[2834]: I0517 00:23:22.398986 2834 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:23:22.399010 kubelet[2834]: I0517 00:23:22.399001 2834 kubelet.go:314] "Adding apiserver pod source" May 17 00:23:22.399010 kubelet[2834]: I0517 00:23:22.399011 2834 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:23:22.401167 kubelet[2834]: W0517 00:23:22.401107 2834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.202.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-750554c5a6&limit=500&resourceVersion=0": dial tcp 147.75.202.203:6443: connect: connection refused May 17 00:23:22.401167 kubelet[2834]: E0517 00:23:22.401157 2834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://147.75.202.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-750554c5a6&limit=500&resourceVersion=0\": dial tcp 147.75.202.203:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:22.402012 kubelet[2834]: I0517 00:23:22.401984 2834 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:23:22.402432 kubelet[2834]: W0517 00:23:22.402414 2834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.202.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.202.203:6443: connect: connection refused May 17 00:23:22.402460 kubelet[2834]: E0517 00:23:22.402438 2834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.202.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.202.203:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:22.402495 kubelet[2834]: I0517 00:23:22.402487 2834 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:23:22.402944 kubelet[2834]: W0517 00:23:22.402906 2834 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:23:22.404431 kubelet[2834]: I0517 00:23:22.404422 2834 server.go:1274] "Started kubelet" May 17 00:23:22.404474 kubelet[2834]: I0517 00:23:22.404450 2834 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:23:22.404538 kubelet[2834]: I0517 00:23:22.404457 2834 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:23:22.404717 kubelet[2834]: I0517 00:23:22.404670 2834 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:23:22.405575 kubelet[2834]: I0517 00:23:22.405566 2834 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:23:22.405630 kubelet[2834]: I0517 00:23:22.405620 2834 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:23:22.405668 kubelet[2834]: I0517 00:23:22.405660 2834 server.go:449] "Adding debug handlers to kubelet server" May 17 00:23:22.405696 kubelet[2834]: E0517 00:23:22.405685 2834 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-750554c5a6\" not found" May 17 00:23:22.405696 kubelet[2834]: I0517 00:23:22.405689 2834 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:23:22.405758 kubelet[2834]: I0517 00:23:22.405727 2834 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:23:22.405787 kubelet[2834]: I0517 00:23:22.405771 2834 reconciler.go:26] "Reconciler: start to sync state" May 17 00:23:22.405866 kubelet[2834]: E0517 00:23:22.405848 2834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-750554c5a6?timeout=10s\": dial tcp 147.75.202.203:6443: connect: connection refused" interval="200ms" May 17 00:23:22.405926 kubelet[2834]: W0517 00:23:22.405899 2834 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.202.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.203:6443: connect: connection refused May 17 00:23:22.405959 kubelet[2834]: E0517 00:23:22.405935 2834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.202.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.202.203:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:22.406244 kubelet[2834]: I0517 00:23:22.406231 2834 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:23:22.406790 kubelet[2834]: I0517 00:23:22.406782 2834 factory.go:221] Registration of the containerd container factory successfully May 17 00:23:22.406790 kubelet[2834]: I0517 00:23:22.406790 2834 factory.go:221] Registration of the systemd container factory successfully May 17 00:23:22.408310 kubelet[2834]: E0517 00:23:22.408298 2834 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:23:22.410046 kubelet[2834]: E0517 00:23:22.408432 2834 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.202.203:6443/api/v1/namespaces/default/events\": dial tcp 147.75.202.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-n-750554c5a6.184028b2781fcd93 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-750554c5a6,UID:ci-4081.3.3-n-750554c5a6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-750554c5a6,},FirstTimestamp:2025-05-17 00:23:22.404392339 +0000 UTC m=+0.505098490,LastTimestamp:2025-05-17 00:23:22.404392339 +0000 UTC m=+0.505098490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-750554c5a6,}" May 17 00:23:22.414868 kubelet[2834]: I0517 00:23:22.414849 2834 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:23:22.415373 kubelet[2834]: I0517 00:23:22.415361 2834 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:23:22.415373 kubelet[2834]: I0517 00:23:22.415376 2834 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:23:22.415431 kubelet[2834]: I0517 00:23:22.415387 2834 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:23:22.415431 kubelet[2834]: E0517 00:23:22.415410 2834 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:23:22.415910 kubelet[2834]: W0517 00:23:22.415884 2834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.202.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.202.203:6443: connect: connection refused May 17 00:23:22.415986 kubelet[2834]: E0517 00:23:22.415918 2834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.202.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.202.203:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:22.419708 kubelet[2834]: I0517 00:23:22.419698 2834 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:23:22.419708 kubelet[2834]: I0517 00:23:22.419707 2834 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:23:22.419766 kubelet[2834]: I0517 00:23:22.419716 2834 state_mem.go:36] "Initialized new in-memory state store" May 17 00:23:22.420673 kubelet[2834]: I0517 00:23:22.420666 2834 policy_none.go:49] "None policy: Start" May 17 00:23:22.420904 kubelet[2834]: I0517 00:23:22.420897 2834 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:23:22.420932 kubelet[2834]: I0517 00:23:22.420908 2834 state_mem.go:35] "Initializing new in-memory state store" May 17 00:23:22.423302 kubelet[2834]: I0517 00:23:22.423292 2834 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:23:22.423385 kubelet[2834]: I0517 00:23:22.423379 2834 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:23:22.423433 kubelet[2834]: I0517 00:23:22.423385 2834 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:23:22.423500 kubelet[2834]: I0517 00:23:22.423492 2834 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:23:22.423882 kubelet[2834]: E0517 00:23:22.423874 2834 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-n-750554c5a6\" not found" May 17 00:23:22.525303 kubelet[2834]: I0517 00:23:22.525283 2834 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-750554c5a6" May 17 00:23:22.525575 kubelet[2834]: E0517 00:23:22.525519 2834 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.202.203:6443/api/v1/nodes\": dial tcp 147.75.202.203:6443: connect: connection refused" node="ci-4081.3.3-n-750554c5a6" May 17 00:23:22.607021 kubelet[2834]: I0517 00:23:22.606789 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01e7d54e3c74823580104dfc1ef182e0-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-750554c5a6\" (UID: \"01e7d54e3c74823580104dfc1ef182e0\") " 
pod="kube-system/kube-apiserver-ci-4081.3.3-n-750554c5a6" May 17 00:23:22.607021 kubelet[2834]: I0517 00:23:22.606911 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01e7d54e3c74823580104dfc1ef182e0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-750554c5a6\" (UID: \"01e7d54e3c74823580104dfc1ef182e0\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-750554c5a6" May 17 00:23:22.607021 kubelet[2834]: I0517 00:23:22.607007 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0683d167485af5989ee98d99fdec4bf2-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-750554c5a6\" (UID: \"0683d167485af5989ee98d99fdec4bf2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" May 17 00:23:22.607714 kubelet[2834]: I0517 00:23:22.607104 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0683d167485af5989ee98d99fdec4bf2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-750554c5a6\" (UID: \"0683d167485af5989ee98d99fdec4bf2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" May 17 00:23:22.607714 kubelet[2834]: E0517 00:23:22.607089 2834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-750554c5a6?timeout=10s\": dial tcp 147.75.202.203:6443: connect: connection refused" interval="400ms" May 17 00:23:22.607714 kubelet[2834]: I0517 00:23:22.607201 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8344a7c10cb027257caeb62c215dcb85-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-750554c5a6\" (UID: \"8344a7c10cb027257caeb62c215dcb85\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-750554c5a6" May 17 00:23:22.607714 kubelet[2834]: I0517 00:23:22.607291 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01e7d54e3c74823580104dfc1ef182e0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-750554c5a6\" (UID: \"01e7d54e3c74823580104dfc1ef182e0\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-750554c5a6" May 17 00:23:22.607714 kubelet[2834]: I0517 00:23:22.607378 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0683d167485af5989ee98d99fdec4bf2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-750554c5a6\" (UID: \"0683d167485af5989ee98d99fdec4bf2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" May 17 00:23:22.608345 kubelet[2834]: I0517 00:23:22.607466 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0683d167485af5989ee98d99fdec4bf2-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-750554c5a6\" (UID: \"0683d167485af5989ee98d99fdec4bf2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" May 17 00:23:22.608345 kubelet[2834]: I0517 00:23:22.607594 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0683d167485af5989ee98d99fdec4bf2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-750554c5a6\" (UID: \"0683d167485af5989ee98d99fdec4bf2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" May 17 00:23:22.730464 kubelet[2834]: I0517 00:23:22.730403 2834 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-750554c5a6" May 17 00:23:22.731246 kubelet[2834]: E0517 00:23:22.731126 2834 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.202.203:6443/api/v1/nodes\": dial tcp 147.75.202.203:6443: connect: connection refused" node="ci-4081.3.3-n-750554c5a6" May 17 00:23:22.821319 containerd[1952]: time="2025-05-17T00:23:22.821188982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-750554c5a6,Uid:01e7d54e3c74823580104dfc1ef182e0,Namespace:kube-system,Attempt:0,}" May 17 00:23:22.822189 containerd[1952]: time="2025-05-17T00:23:22.821634945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-750554c5a6,Uid:0683d167485af5989ee98d99fdec4bf2,Namespace:kube-system,Attempt:0,}" May 17 00:23:22.822376 containerd[1952]: time="2025-05-17T00:23:22.822332625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-750554c5a6,Uid:8344a7c10cb027257caeb62c215dcb85,Namespace:kube-system,Attempt:0,}" May 17 00:23:23.008776 kubelet[2834]: E0517 00:23:23.008645 2834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-750554c5a6?timeout=10s\": dial tcp 147.75.202.203:6443: connect: connection refused" interval="800ms" May 17 00:23:23.132639 kubelet[2834]: I0517 00:23:23.132597 2834 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-750554c5a6" May 17 00:23:23.132791 kubelet[2834]: E0517 00:23:23.132752 2834 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.202.203:6443/api/v1/nodes\": dial tcp 147.75.202.203:6443: connect: connection refused" node="ci-4081.3.3-n-750554c5a6" May 17 00:23:23.213131 kubelet[2834]: W0517 00:23:23.213062 2834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.202.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.203:6443: connect: connection refused May 17 00:23:23.213131 kubelet[2834]: E0517 00:23:23.213108 2834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.202.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.202.203:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:23.342934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537022995.mount: Deactivated successfully. 
May 17 00:23:23.344031 containerd[1952]: time="2025-05-17T00:23:23.343971895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:23.344263 containerd[1952]: time="2025-05-17T00:23:23.344220989Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:23:23.345027 containerd[1952]: time="2025-05-17T00:23:23.344985920Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:23.345403 containerd[1952]: time="2025-05-17T00:23:23.345363251Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:23:23.345694 containerd[1952]: time="2025-05-17T00:23:23.345654746Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:23.345951 containerd[1952]: time="2025-05-17T00:23:23.345911478Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:23:23.346201 containerd[1952]: time="2025-05-17T00:23:23.346161526Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:23.348146 containerd[1952]: time="2025-05-17T00:23:23.348111144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:23.348991 containerd[1952]: time="2025-05-17T00:23:23.348934499Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.567531ms" May 17 00:23:23.349401 containerd[1952]: time="2025-05-17T00:23:23.349354562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 527.580637ms" May 17 00:23:23.350842 containerd[1952]: time="2025-05-17T00:23:23.350800883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 529.457533ms" May 17 00:23:23.396107 kubelet[2834]: W0517 00:23:23.396043 2834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.202.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.202.203:6443: connect: connection refused May 17 00:23:23.396107 
kubelet[2834]: E0517 00:23:23.396085 2834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.202.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.202.203:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:23.445148 containerd[1952]: time="2025-05-17T00:23:23.444927624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:23.445148 containerd[1952]: time="2025-05-17T00:23:23.445129899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:23.445148 containerd[1952]: time="2025-05-17T00:23:23.445137991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:23.445279 containerd[1952]: time="2025-05-17T00:23:23.445192379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:23.445279 containerd[1952]: time="2025-05-17T00:23:23.445173013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:23.445279 containerd[1952]: time="2025-05-17T00:23:23.445201424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:23.445279 containerd[1952]: time="2025-05-17T00:23:23.445208541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:23.445279 containerd[1952]: time="2025-05-17T00:23:23.445252162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:23.445471 containerd[1952]: time="2025-05-17T00:23:23.445443809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:23.445492 containerd[1952]: time="2025-05-17T00:23:23.445472304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:23.445492 containerd[1952]: time="2025-05-17T00:23:23.445480504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:23.445542 containerd[1952]: time="2025-05-17T00:23:23.445531791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:23.491007 containerd[1952]: time="2025-05-17T00:23:23.490977863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-750554c5a6,Uid:8344a7c10cb027257caeb62c215dcb85,Namespace:kube-system,Attempt:0,} returns sandbox id \"41ff2c3261f3ea2d0df3cd382010fd240d10e618aa95eaf5f3e9fcd803054c5d\"" May 17 00:23:23.491079 containerd[1952]: time="2025-05-17T00:23:23.491011630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-750554c5a6,Uid:0683d167485af5989ee98d99fdec4bf2,Namespace:kube-system,Attempt:0,} returns sandbox id \"db708286c42d096e8a7016a6fbb83e32ee3e3d448acdd796ef9f305ad1c144bc\"" May 17 00:23:23.491079 containerd[1952]: time="2025-05-17T00:23:23.491033544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-750554c5a6,Uid:01e7d54e3c74823580104dfc1ef182e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"93cfd6d5f186fe1112a1a69fc947279fd5a0102f4ef729283351c01a81b0f8af\"" May 17 00:23:23.492350 containerd[1952]: time="2025-05-17T00:23:23.492337662Z" level=info msg="CreateContainer within sandbox \"41ff2c3261f3ea2d0df3cd382010fd240d10e618aa95eaf5f3e9fcd803054c5d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:23:23.492388 containerd[1952]: time="2025-05-17T00:23:23.492338627Z" level=info msg="CreateContainer within sandbox \"db708286c42d096e8a7016a6fbb83e32ee3e3d448acdd796ef9f305ad1c144bc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:23:23.492434 containerd[1952]: time="2025-05-17T00:23:23.492339029Z" level=info msg="CreateContainer within sandbox \"93cfd6d5f186fe1112a1a69fc947279fd5a0102f4ef729283351c01a81b0f8af\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:23:23.498763 containerd[1952]: time="2025-05-17T00:23:23.498710602Z" level=info msg="CreateContainer within sandbox \"db708286c42d096e8a7016a6fbb83e32ee3e3d448acdd796ef9f305ad1c144bc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b1aa6e86e035ead2cdbbcf1605a1f4e2d15938850a37e1c8ea01c1a13ab90305\"" May 17 00:23:23.498984 containerd[1952]: time="2025-05-17T00:23:23.498948252Z" level=info msg="StartContainer for \"b1aa6e86e035ead2cdbbcf1605a1f4e2d15938850a37e1c8ea01c1a13ab90305\"" May 17 00:23:23.499837 containerd[1952]: time="2025-05-17T00:23:23.499794468Z" level=info msg="CreateContainer within sandbox \"93cfd6d5f186fe1112a1a69fc947279fd5a0102f4ef729283351c01a81b0f8af\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cf4629a70e472d794d21dfda67be6e0f86d380d90e39e147e6609ab7bc205128\"" May 17 00:23:23.499990 containerd[1952]: time="2025-05-17T00:23:23.499977380Z" level=info msg="StartContainer for \"cf4629a70e472d794d21dfda67be6e0f86d380d90e39e147e6609ab7bc205128\"" May 17 00:23:23.500445 containerd[1952]: time="2025-05-17T00:23:23.500428844Z" level=info msg="CreateContainer within sandbox \"41ff2c3261f3ea2d0df3cd382010fd240d10e618aa95eaf5f3e9fcd803054c5d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9cb29c0d377ed7681b8cfbb603711efb6f29702ca6c1602fd6db0261c3ed774d\"" May 17 00:23:23.500578 containerd[1952]: time="2025-05-17T00:23:23.500568344Z" level=info msg="StartContainer for \"9cb29c0d377ed7681b8cfbb603711efb6f29702ca6c1602fd6db0261c3ed774d\"" May 17 00:23:23.507087 kubelet[2834]: W0517 00:23:23.507041 2834 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.202.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.202.203:6443: connect: connection refused May 17 00:23:23.507174 kubelet[2834]: E0517 00:23:23.507100 2834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.202.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.202.203:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:23.546415 containerd[1952]: time="2025-05-17T00:23:23.546390268Z" level=info msg="StartContainer for \"cf4629a70e472d794d21dfda67be6e0f86d380d90e39e147e6609ab7bc205128\" returns successfully" May 17 00:23:23.546415 containerd[1952]: time="2025-05-17T00:23:23.546412525Z" level=info msg="StartContainer for \"9cb29c0d377ed7681b8cfbb603711efb6f29702ca6c1602fd6db0261c3ed774d\" returns successfully" May 17 00:23:23.546570 containerd[1952]: time="2025-05-17T00:23:23.546402275Z" level=info msg="StartContainer for \"b1aa6e86e035ead2cdbbcf1605a1f4e2d15938850a37e1c8ea01c1a13ab90305\" returns successfully" May 17 00:23:23.934718 kubelet[2834]: I0517 00:23:23.934700 2834 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-750554c5a6" May 17 00:23:24.142078 kubelet[2834]: E0517 00:23:24.142052 2834 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-n-750554c5a6\" not found" node="ci-4081.3.3-n-750554c5a6" May 17 00:23:24.244960 kubelet[2834]: I0517 00:23:24.244943 2834 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-n-750554c5a6" May 17 00:23:24.244960 kubelet[2834]: E0517 00:23:24.244963 2834 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.3-n-750554c5a6\": node \"ci-4081.3.3-n-750554c5a6\" not found" May 17 00:23:24.399915 kubelet[2834]: I0517 00:23:24.399860 2834 apiserver.go:52] "Watching apiserver" May 17 00:23:24.405961 kubelet[2834]: I0517 00:23:24.405893 2834 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:23:24.434666 kubelet[2834]: E0517 00:23:24.434589 2834 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.3-n-750554c5a6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" May 17 00:23:24.434666 kubelet[2834]: E0517 00:23:24.434640 2834 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.3-n-750554c5a6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-n-750554c5a6" May 17 00:23:24.435007 kubelet[2834]: E0517 00:23:24.434641 2834 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-n-750554c5a6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.3-n-750554c5a6" May 17 00:23:25.438821 kubelet[2834]: W0517 00:23:25.438764 2834 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:23:25.439761 kubelet[2834]: W0517 00:23:25.439709 2834 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:23:25.524749 kubelet[2834]: W0517 00:23:25.524666 2834 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:23:26.835945 systemd[1]: Reloading requested from client PID 3147 ('systemctl') (unit session-11.scope)... May 17 00:23:26.835953 systemd[1]: Reloading... May 17 00:23:26.865584 zram_generator::config[3186]: No configuration found. May 17 00:23:26.937770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:23:27.001649 systemd[1]: Reloading finished in 165 ms. May 17 00:23:27.037889 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:27.043377 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:23:27.043542 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:27.059952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:27.322087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:27.327124 (kubelet)[3260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:23:27.367006 kubelet[3260]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:23:27.367006 kubelet[3260]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:23:27.367006 kubelet[3260]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:23:27.367352 kubelet[3260]: I0517 00:23:27.367077 3260 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:23:27.372612 kubelet[3260]: I0517 00:23:27.372560 3260 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:23:27.372612 kubelet[3260]: I0517 00:23:27.372578 3260 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:23:27.372840 kubelet[3260]: I0517 00:23:27.372802 3260 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:23:27.374541 kubelet[3260]: I0517 00:23:27.374521 3260 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
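The certificate_store entry above shows the restarted kubelet reusing its rotated client credential, which is kept as a single combined cert/key PEM at /var/lib/kubelet/pki/kubelet-client-current.pem. As a rough sketch of that mechanism (not the kubelet's actual code path; only the file path is taken from the log), Go's crypto/tls can load such a combined file by passing the same path for both the certificate and the key argument:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// kubelet-client-current.pem holds both the certificate and the
	// private key in one PEM file; tls.LoadX509KeyPair accepts the same
	// path for both arguments and extracts the respective PEM blocks.
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	pair, err := tls.LoadX509KeyPair(pem, pem)
	if err != nil {
		log.Fatalf("loading client credential: %v", err)
	}
	fmt.Printf("loaded client certificate chain of %d block(s)\n", len(pair.Certificate))
}
```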
May 17 00:23:27.376740 kubelet[3260]: I0517 00:23:27.376697 3260 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:23:27.379273 kubelet[3260]: E0517 00:23:27.379249 3260 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:23:27.379338 kubelet[3260]: I0517 00:23:27.379275 3260 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:23:27.388888 kubelet[3260]: I0517 00:23:27.388845 3260 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:23:27.389186 kubelet[3260]: I0517 00:23:27.389149 3260 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:23:27.389293 kubelet[3260]: I0517 00:23:27.389254 3260 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:23:27.389428 kubelet[3260]: I0517 00:23:27.389273 3260 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-750554c5a6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:23:27.389525 kubelet[3260]: I0517 00:23:27.389438 3260 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:23:27.389525 kubelet[3260]: I0517 00:23:27.389449 3260 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:23:27.389525 kubelet[3260]: I0517 00:23:27.389473 3260 state_mem.go:36] "Initialized new in-memory state store" May 17 00:23:27.389616 kubelet[3260]: I0517 00:23:27.389557 3260 kubelet.go:408] "Attempting to sync node with API server" May 17 00:23:27.389616 kubelet[3260]: I0517 00:23:27.389569 3260 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:23:27.389616 kubelet[3260]: I0517 
00:23:27.389593 3260 kubelet.go:314] "Adding apiserver pod source" May 17 00:23:27.389616 kubelet[3260]: I0517 00:23:27.389604 3260 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:23:27.390337 kubelet[3260]: I0517 00:23:27.390305 3260 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:23:27.391009 kubelet[3260]: I0517 00:23:27.390986 3260 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:23:27.392114 kubelet[3260]: I0517 00:23:27.392099 3260 server.go:1274] "Started kubelet" May 17 00:23:27.392211 kubelet[3260]: I0517 00:23:27.392173 3260 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:23:27.392273 kubelet[3260]: I0517 00:23:27.392180 3260 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:23:27.392454 kubelet[3260]: I0517 00:23:27.392436 3260 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:23:27.393209 kubelet[3260]: I0517 00:23:27.393192 3260 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:23:27.393282 kubelet[3260]: I0517 00:23:27.393216 3260 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:23:27.393339 kubelet[3260]: E0517 00:23:27.393271 3260 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-750554c5a6\" not found" May 17 00:23:27.393339 kubelet[3260]: I0517 00:23:27.393292 3260 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:23:27.393339 kubelet[3260]: I0517 00:23:27.393323 3260 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:23:27.393709 kubelet[3260]: I0517 00:23:27.393510 3260 reconciler.go:26] "Reconciler: start to sync state" May 17 00:23:27.393709 kubelet[3260]: I0517 00:23:27.393597 3260 server.go:449] "Adding debug handlers to kubelet server" May 17 00:23:27.393709 kubelet[3260]: I0517 00:23:27.393707 3260 factory.go:221] Registration of the systemd container factory successfully May 17 00:23:27.393871 kubelet[3260]: E0517 00:23:27.393805 3260 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:23:27.393871 kubelet[3260]: I0517 00:23:27.393810 3260 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:23:27.395666 kubelet[3260]: I0517 00:23:27.395645 3260 factory.go:221] Registration of the containerd container factory successfully May 17 00:23:27.402401 kubelet[3260]: I0517 00:23:27.402362 3260 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:23:27.403380 kubelet[3260]: I0517 00:23:27.403361 3260 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:23:27.403477 kubelet[3260]: I0517 00:23:27.403388 3260 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:23:27.403477 kubelet[3260]: I0517 00:23:27.403405 3260 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:23:27.403477 kubelet[3260]: E0517 00:23:27.403445 3260 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:23:27.430197 kubelet[3260]: I0517 00:23:27.430173 3260 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:23:27.430197 kubelet[3260]: I0517 00:23:27.430190 3260 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:23:27.430197 kubelet[3260]: I0517 00:23:27.430206 3260 state_mem.go:36] "Initialized new in-memory state store" May 17 00:23:27.430346 kubelet[3260]: I0517 00:23:27.430334 3260 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:23:27.430396 kubelet[3260]: I0517 00:23:27.430345 3260 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:23:27.430396 kubelet[3260]: I0517 00:23:27.430363 3260 policy_none.go:49] "None policy: Start" May 17 00:23:27.430809 kubelet[3260]: I0517 00:23:27.430797 3260 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:23:27.430854 kubelet[3260]: I0517 00:23:27.430816 3260 state_mem.go:35] "Initializing new in-memory state store" May 17 00:23:27.430933 kubelet[3260]: I0517 00:23:27.430924 3260 state_mem.go:75] "Updated machine memory state" May 17 00:23:27.431793 kubelet[3260]: I0517 00:23:27.431781 3260 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:23:27.431918 kubelet[3260]: I0517 00:23:27.431909 3260 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:23:27.431962 kubelet[3260]: I0517 00:23:27.431918 3260 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:23:27.432087 kubelet[3260]: I0517 00:23:27.432071 3260 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:23:27.513666 kubelet[3260]: W0517 00:23:27.513557 3260 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:23:27.513666 kubelet[3260]: W0517 00:23:27.513611 3260 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:23:27.513976 kubelet[3260]: W0517 00:23:27.513552 3260 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:23:27.513976 kubelet[3260]: E0517 00:23:27.513747 3260 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.3-n-750554c5a6\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-n-750554c5a6" May 17 00:23:27.513976 kubelet[3260]: E0517 00:23:27.513751 3260 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.3-n-750554c5a6\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" May 17 00:23:27.513976 kubelet[3260]: E0517 00:23:27.513861 3260 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-n-750554c5a6\" already exists" 
pod="kube-system/kube-apiserver-ci-4081.3.3-n-750554c5a6" May 17 00:23:27.539557 kubelet[3260]: I0517 00:23:27.539462 3260 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-n-750554c5a6" May 17 00:23:27.549440 kubelet[3260]: I0517 00:23:27.549383 3260 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.3-n-750554c5a6" May 17 00:23:27.549697 kubelet[3260]: I0517 00:23:27.549570 3260 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-n-750554c5a6" May 17 00:23:27.695670 kubelet[3260]: I0517 00:23:27.695343 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01e7d54e3c74823580104dfc1ef182e0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-750554c5a6\" (UID: \"01e7d54e3c74823580104dfc1ef182e0\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-750554c5a6" May 17 00:23:27.695670 kubelet[3260]: I0517 00:23:27.695447 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0683d167485af5989ee98d99fdec4bf2-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-750554c5a6\" (UID: \"0683d167485af5989ee98d99fdec4bf2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" May 17 00:23:27.695670 kubelet[3260]: I0517 00:23:27.695523 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8344a7c10cb027257caeb62c215dcb85-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-750554c5a6\" (UID: \"8344a7c10cb027257caeb62c215dcb85\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-750554c5a6" May 17 00:23:27.695670 kubelet[3260]: I0517 00:23:27.695580 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0683d167485af5989ee98d99fdec4bf2-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-750554c5a6\" (UID: \"0683d167485af5989ee98d99fdec4bf2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" May 17 00:23:27.695670 kubelet[3260]: I0517 00:23:27.695631 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0683d167485af5989ee98d99fdec4bf2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-750554c5a6\" (UID: \"0683d167485af5989ee98d99fdec4bf2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" May 17 00:23:27.696317 kubelet[3260]: I0517 00:23:27.695679 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01e7d54e3c74823580104dfc1ef182e0-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-750554c5a6\" (UID: \"01e7d54e3c74823580104dfc1ef182e0\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-750554c5a6" May 17 00:23:27.696317 kubelet[3260]: I0517 00:23:27.695725 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01e7d54e3c74823580104dfc1ef182e0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-750554c5a6\" (UID: \"01e7d54e3c74823580104dfc1ef182e0\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-750554c5a6" May 17 00:23:27.696317 
kubelet[3260]: I0517 00:23:27.695774 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0683d167485af5989ee98d99fdec4bf2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-750554c5a6\" (UID: \"0683d167485af5989ee98d99fdec4bf2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" May 17 00:23:27.696317 kubelet[3260]: I0517 00:23:27.695819 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0683d167485af5989ee98d99fdec4bf2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-750554c5a6\" (UID: \"0683d167485af5989ee98d99fdec4bf2\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" May 17 00:23:28.389990 kubelet[3260]: I0517 00:23:28.389937 3260 apiserver.go:52] "Watching apiserver" May 17 00:23:28.393495 kubelet[3260]: I0517 00:23:28.393453 3260 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:23:28.412563 kubelet[3260]: W0517 00:23:28.412542 3260 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:23:28.412653 kubelet[3260]: E0517 00:23:28.412591 3260 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.3-n-750554c5a6\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" May 17 00:23:28.412653 kubelet[3260]: W0517 00:23:28.412542 3260 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:23:28.412653 kubelet[3260]: E0517 00:23:28.412643 3260 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-n-750554c5a6\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-750554c5a6" May 17 00:23:28.428895 kubelet[3260]: I0517 00:23:28.428827 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-750554c5a6" podStartSLOduration=3.428812549 podStartE2EDuration="3.428812549s" podCreationTimestamp="2025-05-17 00:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:28.422429072 +0000 UTC m=+1.091401618" watchObservedRunningTime="2025-05-17 00:23:28.428812549 +0000 UTC m=+1.097785081" May 17 00:23:28.433445 kubelet[3260]: I0517 00:23:28.433420 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-n-750554c5a6" podStartSLOduration=3.433408253 podStartE2EDuration="3.433408253s" podCreationTimestamp="2025-05-17 00:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:28.428931287 +0000 UTC m=+1.097903829" watchObservedRunningTime="2025-05-17 00:23:28.433408253 +0000 UTC m=+1.102380786" May 17 00:23:28.438565 kubelet[3260]: I0517 00:23:28.438512 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-n-750554c5a6" podStartSLOduration=3.438496453 podStartE2EDuration="3.438496453s" podCreationTimestamp="2025-05-17 00:23:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:28.433391258 +0000 UTC m=+1.102363794" watchObservedRunningTime="2025-05-17 00:23:28.438496453 +0000 UTC m=+1.107468989" May 17 00:23:32.296347 kubelet[3260]: I0517 00:23:32.296291 3260 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:23:32.296719 containerd[1952]: time="2025-05-17T00:23:32.296541588Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:23:32.296946 kubelet[3260]: I0517 00:23:32.296715 3260 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:23:33.289158 systemd[1]: Started sshd@9-147.75.202.203:22-218.92.0.158:23170.service - OpenSSH per-connection server daemon (218.92.0.158:23170). May 17 00:23:33.333468 kubelet[3260]: I0517 00:23:33.333343 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b96fa32-e254-4f8c-b067-96b87ebdcd1e-xtables-lock\") pod \"kube-proxy-kjlfs\" (UID: \"9b96fa32-e254-4f8c-b067-96b87ebdcd1e\") " pod="kube-system/kube-proxy-kjlfs" May 17 00:23:33.333468 kubelet[3260]: I0517 00:23:33.333445 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b96fa32-e254-4f8c-b067-96b87ebdcd1e-lib-modules\") pod \"kube-proxy-kjlfs\" (UID: \"9b96fa32-e254-4f8c-b067-96b87ebdcd1e\") " pod="kube-system/kube-proxy-kjlfs" May 17 00:23:33.334467 kubelet[3260]: I0517 00:23:33.333541 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zcfb\" (UniqueName: \"kubernetes.io/projected/9b96fa32-e254-4f8c-b067-96b87ebdcd1e-kube-api-access-7zcfb\") pod \"kube-proxy-kjlfs\" (UID: \"9b96fa32-e254-4f8c-b067-96b87ebdcd1e\") " pod="kube-system/kube-proxy-kjlfs" May 17 00:23:33.334467 kubelet[3260]: I0517 00:23:33.333608 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b96fa32-e254-4f8c-b067-96b87ebdcd1e-kube-proxy\") pod \"kube-proxy-kjlfs\" (UID: \"9b96fa32-e254-4f8c-b067-96b87ebdcd1e\") " pod="kube-system/kube-proxy-kjlfs" May 17 00:23:33.433896 kubelet[3260]: I0517 00:23:33.433846 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e11bec3b-49c2-46cc-a594-b6fdf3ade422-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-gm2rb\" (UID: \"e11bec3b-49c2-46cc-a594-b6fdf3ade422\") " pod="tigera-operator/tigera-operator-7c5755cdcb-gm2rb" May 17 00:23:33.433896 kubelet[3260]: I0517 00:23:33.433881 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbrcz\" (UniqueName: \"kubernetes.io/projected/e11bec3b-49c2-46cc-a594-b6fdf3ade422-kube-api-access-bbrcz\") pod \"tigera-operator-7c5755cdcb-gm2rb\" (UID: \"e11bec3b-49c2-46cc-a594-b6fdf3ade422\") " pod="tigera-operator/tigera-operator-7c5755cdcb-gm2rb" May 17 00:23:33.592906 containerd[1952]: time="2025-05-17T00:23:33.592674514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjlfs,Uid:9b96fa32-e254-4f8c-b067-96b87ebdcd1e,Namespace:kube-system,Attempt:0,}" May 17 
00:23:33.603751 containerd[1952]: time="2025-05-17T00:23:33.603713285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:33.603751 containerd[1952]: time="2025-05-17T00:23:33.603740868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:33.603751 containerd[1952]: time="2025-05-17T00:23:33.603747882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:33.603850 containerd[1952]: time="2025-05-17T00:23:33.603792269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:33.630423 containerd[1952]: time="2025-05-17T00:23:33.630399232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjlfs,Uid:9b96fa32-e254-4f8c-b067-96b87ebdcd1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"acb8140d078bd04600bb9d6bb117c14f37c2b63cab2b3232301694570dc3be10\"" May 17 00:23:33.631898 containerd[1952]: time="2025-05-17T00:23:33.631858704Z" level=info msg="CreateContainer within sandbox \"acb8140d078bd04600bb9d6bb117c14f37c2b63cab2b3232301694570dc3be10\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:23:33.637147 containerd[1952]: time="2025-05-17T00:23:33.637100620Z" level=info msg="CreateContainer within sandbox \"acb8140d078bd04600bb9d6bb117c14f37c2b63cab2b3232301694570dc3be10\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"335dee12ef3c7aceb935facf71e0dd9d8c12c6397f1527d30b331efae077463c\"" May 17 00:23:33.637361 containerd[1952]: time="2025-05-17T00:23:33.637312984Z" level=info msg="StartContainer for \"335dee12ef3c7aceb935facf71e0dd9d8c12c6397f1527d30b331efae077463c\"" May 17 00:23:33.677337 containerd[1952]: time="2025-05-17T00:23:33.677312729Z" level=info msg="StartContainer for \"335dee12ef3c7aceb935facf71e0dd9d8c12c6397f1527d30b331efae077463c\" returns successfully" May 17 00:23:33.726397 containerd[1952]: time="2025-05-17T00:23:33.726334841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-gm2rb,Uid:e11bec3b-49c2-46cc-a594-b6fdf3ade422,Namespace:tigera-operator,Attempt:0,}" May 17 00:23:33.736287 containerd[1952]: time="2025-05-17T00:23:33.736242790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:33.736515 containerd[1952]: time="2025-05-17T00:23:33.736288055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:33.736537 containerd[1952]: time="2025-05-17T00:23:33.736514799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:33.736594 containerd[1952]: time="2025-05-17T00:23:33.736580461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:33.771428 containerd[1952]: time="2025-05-17T00:23:33.771410504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-gm2rb,Uid:e11bec3b-49c2-46cc-a594-b6fdf3ade422,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b848bea5993e4eabefe3ec0f306a6143aa2e1551221dfa164ec246a3aa27371b\"" May 17 00:23:33.772168 containerd[1952]: time="2025-05-17T00:23:33.772156731Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:23:34.366767 sshd[3601]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:23:34.445141 kubelet[3260]: I0517 00:23:34.445097 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kjlfs" podStartSLOduration=1.44508624 podStartE2EDuration="1.44508624s" podCreationTimestamp="2025-05-17 00:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:34.44484128 +0000 UTC m=+7.113813811" watchObservedRunningTime="2025-05-17 00:23:34.44508624 +0000 UTC m=+7.114058769" May 17 00:23:35.388324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount165364889.mount: Deactivated successfully. May 17 00:23:35.655547 containerd[1952]: time="2025-05-17T00:23:35.655492553Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:35.655746 containerd[1952]: time="2025-05-17T00:23:35.655687618Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 17 00:23:35.656103 containerd[1952]: time="2025-05-17T00:23:35.656090393Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:35.657149 containerd[1952]: time="2025-05-17T00:23:35.657136699Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:35.657634 containerd[1952]: time="2025-05-17T00:23:35.657621036Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 1.885446385s" May 17 00:23:35.657660 containerd[1952]: time="2025-05-17T00:23:35.657636886Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:23:35.658479 containerd[1952]: time="2025-05-17T00:23:35.658466518Z" level=info msg="CreateContainer within sandbox \"b848bea5993e4eabefe3ec0f306a6143aa2e1551221dfa164ec246a3aa27371b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:23:35.661956 containerd[1952]: time="2025-05-17T00:23:35.661918307Z" level=info msg="CreateContainer within sandbox \"b848bea5993e4eabefe3ec0f306a6143aa2e1551221dfa164ec246a3aa27371b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"e2d39dae8416593e1b905a3cee1f6826efd766c087207d603f4aea05539aebe9\"" May 17 00:23:35.662159 containerd[1952]: time="2025-05-17T00:23:35.662143895Z" level=info msg="StartContainer for \"e2d39dae8416593e1b905a3cee1f6826efd766c087207d603f4aea05539aebe9\"" May 17 00:23:35.688288 containerd[1952]: time="2025-05-17T00:23:35.688265216Z" level=info msg="StartContainer for \"e2d39dae8416593e1b905a3cee1f6826efd766c087207d603f4aea05539aebe9\" returns successfully" May 17 00:23:36.228858 sshd[3341]: PAM: Permission denied for root from 218.92.0.158 May 17 00:23:36.469422 kubelet[3260]: I0517 00:23:36.469295 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-gm2rb" podStartSLOduration=1.5832366009999999 podStartE2EDuration="3.469258772s" podCreationTimestamp="2025-05-17 00:23:33 +0000 UTC" firstStartedPulling="2025-05-17 00:23:33.771964961 +0000 UTC m=+6.440937493" lastFinishedPulling="2025-05-17 00:23:35.657987131 +0000 UTC m=+8.326959664" observedRunningTime="2025-05-17 00:23:36.469062782 +0000 UTC m=+9.138035388" watchObservedRunningTime="2025-05-17 00:23:36.469258772 +0000 UTC m=+9.138231357" May 17 00:23:36.521498 sshd[3656]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:23:38.659531 sshd[3341]: PAM: Permission denied for root from 218.92.0.158 May 17 00:23:38.952381 sshd[3739]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:23:40.205143 sudo[2263]: pam_unix(sudo:session): session closed for user root May 17 00:23:40.206090 sshd[2257]: pam_unix(sshd:session): session closed for user core May 17 00:23:40.208470 systemd[1]: sshd@8-147.75.202.203:22-147.75.109.163:43802.service: Deactivated successfully. May 17 00:23:40.209900 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:23:40.210589 systemd-logind[1937]: Session 11 logged out. Waiting for processes to exit. May 17 00:23:40.211204 systemd-logind[1937]: Removed session 11. May 17 00:23:41.365795 sshd[3341]: PAM: Permission denied for root from 218.92.0.158 May 17 00:23:41.510900 sshd[3341]: Received disconnect from 218.92.0.158 port 23170:11: [preauth] May 17 00:23:41.510900 sshd[3341]: Disconnected from authenticating user root 218.92.0.158 port 23170 [preauth] May 17 00:23:41.512041 systemd[1]: sshd@9-147.75.202.203:22-218.92.0.158:23170.service: Deactivated successfully. May 17 00:23:42.183555 update_engine[1942]: I20250517 00:23:42.183521 1942 update_attempter.cc:509] Updating boot flags... 
May 17 00:23:42.213514 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3794) May 17 00:23:42.241516 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3796) May 17 00:23:42.261514 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3796) May 17 00:23:42.491378 kubelet[3260]: I0517 00:23:42.491351 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a6d5e37a-1e56-4aee-9c17-6137337a639f-typha-certs\") pod \"calico-typha-7b9f5c7694-db75h\" (UID: \"a6d5e37a-1e56-4aee-9c17-6137337a639f\") " pod="calico-system/calico-typha-7b9f5c7694-db75h" May 17 00:23:42.491722 kubelet[3260]: I0517 00:23:42.491387 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6d5e37a-1e56-4aee-9c17-6137337a639f-tigera-ca-bundle\") pod \"calico-typha-7b9f5c7694-db75h\" (UID: \"a6d5e37a-1e56-4aee-9c17-6137337a639f\") " pod="calico-system/calico-typha-7b9f5c7694-db75h" May 17 00:23:42.491722 kubelet[3260]: I0517 00:23:42.491410 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44kdl\" (UniqueName: \"kubernetes.io/projected/a6d5e37a-1e56-4aee-9c17-6137337a639f-kube-api-access-44kdl\") pod \"calico-typha-7b9f5c7694-db75h\" (UID: \"a6d5e37a-1e56-4aee-9c17-6137337a639f\") " pod="calico-system/calico-typha-7b9f5c7694-db75h" May 17 00:23:42.777158 containerd[1952]: time="2025-05-17T00:23:42.776966546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b9f5c7694-db75h,Uid:a6d5e37a-1e56-4aee-9c17-6137337a639f,Namespace:calico-system,Attempt:0,}" May 17 00:23:42.787437 containerd[1952]: time="2025-05-17T00:23:42.787393613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:42.787634 containerd[1952]: time="2025-05-17T00:23:42.787616428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:42.787658 containerd[1952]: time="2025-05-17T00:23:42.787632342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:42.787693 containerd[1952]: time="2025-05-17T00:23:42.787682209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:42.821344 containerd[1952]: time="2025-05-17T00:23:42.821322950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b9f5c7694-db75h,Uid:a6d5e37a-1e56-4aee-9c17-6137337a639f,Namespace:calico-system,Attempt:0,} returns sandbox id \"aa82c55e6e5cff945e379081341b11595d689f159cdb17c5ae9f92a9e41fa108\"" May 17 00:23:42.822108 containerd[1952]: time="2025-05-17T00:23:42.822094333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:23:42.893127 kubelet[3260]: I0517 00:23:42.893010 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/97319639-879f-4787-beca-9b1e6540e2b6-var-run-calico\") pod \"calico-node-5gqjp\" (UID: \"97319639-879f-4787-beca-9b1e6540e2b6\") " pod="calico-system/calico-node-5gqjp" May 17 00:23:42.893127 kubelet[3260]: I0517 00:23:42.893127 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/97319639-879f-4787-beca-9b1e6540e2b6-policysync\") pod \"calico-node-5gqjp\" (UID: \"97319639-879f-4787-beca-9b1e6540e2b6\") " pod="calico-system/calico-node-5gqjp" May 17 00:23:42.893496 kubelet[3260]: I0517 00:23:42.893197 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97319639-879f-4787-beca-9b1e6540e2b6-xtables-lock\") pod \"calico-node-5gqjp\" (UID: \"97319639-879f-4787-beca-9b1e6540e2b6\") " pod="calico-system/calico-node-5gqjp" May 17 00:23:42.893496 kubelet[3260]: I0517 00:23:42.893258 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97319639-879f-4787-beca-9b1e6540e2b6-lib-modules\") pod \"calico-node-5gqjp\" (UID: \"97319639-879f-4787-beca-9b1e6540e2b6\") " pod="calico-system/calico-node-5gqjp" May 17 00:23:42.893496 kubelet[3260]: I0517 00:23:42.893331 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/97319639-879f-4787-beca-9b1e6540e2b6-node-certs\") pod \"calico-node-5gqjp\" (UID: \"97319639-879f-4787-beca-9b1e6540e2b6\") " pod="calico-system/calico-node-5gqjp" May 17 00:23:42.893496 kubelet[3260]: I0517 00:23:42.893396 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2kcd\" (UniqueName: \"kubernetes.io/projected/97319639-879f-4787-beca-9b1e6540e2b6-kube-api-access-d2kcd\") pod \"calico-node-5gqjp\" (UID: \"97319639-879f-4787-beca-9b1e6540e2b6\") " pod="calico-system/calico-node-5gqjp" May 17 00:23:42.893496 kubelet[3260]: I0517 00:23:42.893466 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/97319639-879f-4787-beca-9b1e6540e2b6-cni-net-dir\") pod \"calico-node-5gqjp\" (UID: \"97319639-879f-4787-beca-9b1e6540e2b6\") " pod="calico-system/calico-node-5gqjp" May 17 00:23:42.894013 kubelet[3260]: I0517 00:23:42.893543 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/97319639-879f-4787-beca-9b1e6540e2b6-var-lib-calico\") pod \"calico-node-5gqjp\" (UID: 
\"97319639-879f-4787-beca-9b1e6540e2b6\") " pod="calico-system/calico-node-5gqjp" May 17 00:23:42.894013 kubelet[3260]: I0517 00:23:42.893605 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/97319639-879f-4787-beca-9b1e6540e2b6-flexvol-driver-host\") pod \"calico-node-5gqjp\" (UID: \"97319639-879f-4787-beca-9b1e6540e2b6\") " pod="calico-system/calico-node-5gqjp" May 17 00:23:42.894013 kubelet[3260]: I0517 00:23:42.893667 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/97319639-879f-4787-beca-9b1e6540e2b6-cni-bin-dir\") pod \"calico-node-5gqjp\" (UID: \"97319639-879f-4787-beca-9b1e6540e2b6\") " pod="calico-system/calico-node-5gqjp" May 17 00:23:42.894013 kubelet[3260]: I0517 00:23:42.893731 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/97319639-879f-4787-beca-9b1e6540e2b6-cni-log-dir\") pod \"calico-node-5gqjp\" (UID: \"97319639-879f-4787-beca-9b1e6540e2b6\") " pod="calico-system/calico-node-5gqjp" May 17 00:23:42.894013 kubelet[3260]: I0517 00:23:42.893824 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97319639-879f-4787-beca-9b1e6540e2b6-tigera-ca-bundle\") pod \"calico-node-5gqjp\" (UID: \"97319639-879f-4787-beca-9b1e6540e2b6\") " pod="calico-system/calico-node-5gqjp" May 17 00:23:42.997903 kubelet[3260]: E0517 00:23:42.997812 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:42.997903 kubelet[3260]: W0517 00:23:42.997862 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:42.997903 kubelet[3260]: E0517 00:23:42.997908 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.002786 kubelet[3260]: E0517 00:23:43.002699 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.002786 kubelet[3260]: W0517 00:23:43.002757 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.003170 kubelet[3260]: E0517 00:23:43.002809 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.015400 kubelet[3260]: E0517 00:23:43.015346 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.015400 kubelet[3260]: W0517 00:23:43.015390 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.015768 kubelet[3260]: E0517 00:23:43.015437 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.172089 containerd[1952]: time="2025-05-17T00:23:43.171859131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5gqjp,Uid:97319639-879f-4787-beca-9b1e6540e2b6,Namespace:calico-system,Attempt:0,}" May 17 00:23:43.174473 kubelet[3260]: E0517 00:23:43.174439 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hkm4q" podUID="403f708a-0226-4cfa-98fa-55326c364f55" May 17 00:23:43.184326 containerd[1952]: time="2025-05-17T00:23:43.184265683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:43.184326 containerd[1952]: time="2025-05-17T00:23:43.184314386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:43.184326 containerd[1952]: time="2025-05-17T00:23:43.184321572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:43.184462 containerd[1952]: time="2025-05-17T00:23:43.184369340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:43.189701 kubelet[3260]: E0517 00:23:43.189685 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.189701 kubelet[3260]: W0517 00:23:43.189698 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.189817 kubelet[3260]: E0517 00:23:43.189713 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.189842 kubelet[3260]: E0517 00:23:43.189834 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.189842 kubelet[3260]: W0517 00:23:43.189840 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.189877 kubelet[3260]: E0517 00:23:43.189845 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.189927 kubelet[3260]: E0517 00:23:43.189922 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.189948 kubelet[3260]: W0517 00:23:43.189927 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.189948 kubelet[3260]: E0517 00:23:43.189933 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.190052 kubelet[3260]: E0517 00:23:43.190046 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.190073 kubelet[3260]: W0517 00:23:43.190052 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.190073 kubelet[3260]: E0517 00:23:43.190057 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.190140 kubelet[3260]: E0517 00:23:43.190136 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.190161 kubelet[3260]: W0517 00:23:43.190140 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.190161 kubelet[3260]: E0517 00:23:43.190145 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.190218 kubelet[3260]: E0517 00:23:43.190213 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.190236 kubelet[3260]: W0517 00:23:43.190218 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.190236 kubelet[3260]: E0517 00:23:43.190222 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.190295 kubelet[3260]: E0517 00:23:43.190290 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.190315 kubelet[3260]: W0517 00:23:43.190295 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.190315 kubelet[3260]: E0517 00:23:43.190299 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.190383 kubelet[3260]: E0517 00:23:43.190377 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.190401 kubelet[3260]: W0517 00:23:43.190384 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.190401 kubelet[3260]: E0517 00:23:43.190392 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.190485 kubelet[3260]: E0517 00:23:43.190480 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.190515 kubelet[3260]: W0517 00:23:43.190485 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.190515 kubelet[3260]: E0517 00:23:43.190490 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.190576 kubelet[3260]: E0517 00:23:43.190564 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.190576 kubelet[3260]: W0517 00:23:43.190569 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.190576 kubelet[3260]: E0517 00:23:43.190574 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.190655 kubelet[3260]: E0517 00:23:43.190638 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.190655 kubelet[3260]: W0517 00:23:43.190642 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.190655 kubelet[3260]: E0517 00:23:43.190646 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.190735 kubelet[3260]: E0517 00:23:43.190728 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.190735 kubelet[3260]: W0517 00:23:43.190732 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.190795 kubelet[3260]: E0517 00:23:43.190737 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.190828 kubelet[3260]: E0517 00:23:43.190811 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.190828 kubelet[3260]: W0517 00:23:43.190815 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.190828 kubelet[3260]: E0517 00:23:43.190820 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.190923 kubelet[3260]: E0517 00:23:43.190886 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.190923 kubelet[3260]: W0517 00:23:43.190890 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.190923 kubelet[3260]: E0517 00:23:43.190894 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.191004 kubelet[3260]: E0517 00:23:43.190963 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.191004 kubelet[3260]: W0517 00:23:43.190967 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.191004 kubelet[3260]: E0517 00:23:43.190972 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.191096 kubelet[3260]: E0517 00:23:43.191038 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.191096 kubelet[3260]: W0517 00:23:43.191042 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.191096 kubelet[3260]: E0517 00:23:43.191046 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.191184 kubelet[3260]: E0517 00:23:43.191116 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.191184 kubelet[3260]: W0517 00:23:43.191120 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.191184 kubelet[3260]: E0517 00:23:43.191124 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.191269 kubelet[3260]: E0517 00:23:43.191190 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.191269 kubelet[3260]: W0517 00:23:43.191194 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.191269 kubelet[3260]: E0517 00:23:43.191198 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.191269 kubelet[3260]: E0517 00:23:43.191263 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.191269 kubelet[3260]: W0517 00:23:43.191267 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.191395 kubelet[3260]: E0517 00:23:43.191272 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.191395 kubelet[3260]: E0517 00:23:43.191338 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.191395 kubelet[3260]: W0517 00:23:43.191342 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.191395 kubelet[3260]: E0517 00:23:43.191347 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.200929 kubelet[3260]: E0517 00:23:43.200602 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.200929 kubelet[3260]: W0517 00:23:43.200628 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.200929 kubelet[3260]: E0517 00:23:43.200651 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.200929 kubelet[3260]: I0517 00:23:43.200679 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/403f708a-0226-4cfa-98fa-55326c364f55-varrun\") pod \"csi-node-driver-hkm4q\" (UID: \"403f708a-0226-4cfa-98fa-55326c364f55\") " pod="calico-system/csi-node-driver-hkm4q" May 17 00:23:43.201087 kubelet[3260]: E0517 00:23:43.201025 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.201087 kubelet[3260]: W0517 00:23:43.201035 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.201087 kubelet[3260]: E0517 00:23:43.201046 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.201087 kubelet[3260]: I0517 00:23:43.201059 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/403f708a-0226-4cfa-98fa-55326c364f55-socket-dir\") pod \"csi-node-driver-hkm4q\" (UID: \"403f708a-0226-4cfa-98fa-55326c364f55\") " pod="calico-system/csi-node-driver-hkm4q" May 17 00:23:43.201212 kubelet[3260]: E0517 00:23:43.201202 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.201212 kubelet[3260]: W0517 00:23:43.201211 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.201262 kubelet[3260]: E0517 00:23:43.201220 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.201322 kubelet[3260]: E0517 00:23:43.201315 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.201322 kubelet[3260]: W0517 00:23:43.201320 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.201380 kubelet[3260]: E0517 00:23:43.201325 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.201421 kubelet[3260]: E0517 00:23:43.201416 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.201421 kubelet[3260]: W0517 00:23:43.201421 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.201483 kubelet[3260]: E0517 00:23:43.201426 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.201483 kubelet[3260]: I0517 00:23:43.201438 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6fld\" (UniqueName: \"kubernetes.io/projected/403f708a-0226-4cfa-98fa-55326c364f55-kube-api-access-t6fld\") pod \"csi-node-driver-hkm4q\" (UID: \"403f708a-0226-4cfa-98fa-55326c364f55\") " pod="calico-system/csi-node-driver-hkm4q" May 17 00:23:43.201583 kubelet[3260]: E0517 00:23:43.201575 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.201583 kubelet[3260]: W0517 00:23:43.201582 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.201648 kubelet[3260]: E0517 00:23:43.201592 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.201697 kubelet[3260]: E0517 00:23:43.201690 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.201728 kubelet[3260]: W0517 00:23:43.201697 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.201728 kubelet[3260]: E0517 00:23:43.201706 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.201813 kubelet[3260]: E0517 00:23:43.201807 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.201846 kubelet[3260]: W0517 00:23:43.201813 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.201846 kubelet[3260]: E0517 00:23:43.201822 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.201846 kubelet[3260]: I0517 00:23:43.201838 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/403f708a-0226-4cfa-98fa-55326c364f55-registration-dir\") pod \"csi-node-driver-hkm4q\" (UID: \"403f708a-0226-4cfa-98fa-55326c364f55\") " pod="calico-system/csi-node-driver-hkm4q" May 17 00:23:43.201947 kubelet[3260]: E0517 00:23:43.201940 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.201981 kubelet[3260]: W0517 00:23:43.201946 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.201981 kubelet[3260]: E0517 00:23:43.201956 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.201981 kubelet[3260]: I0517 00:23:43.201970 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/403f708a-0226-4cfa-98fa-55326c364f55-kubelet-dir\") pod \"csi-node-driver-hkm4q\" (UID: \"403f708a-0226-4cfa-98fa-55326c364f55\") " pod="calico-system/csi-node-driver-hkm4q" May 17 00:23:43.202094 kubelet[3260]: E0517 00:23:43.202087 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.202127 kubelet[3260]: W0517 00:23:43.202093 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.202127 kubelet[3260]: E0517 00:23:43.202102 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.202218 kubelet[3260]: E0517 00:23:43.202212 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.202218 kubelet[3260]: W0517 00:23:43.202218 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.202279 kubelet[3260]: E0517 00:23:43.202226 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.202327 kubelet[3260]: E0517 00:23:43.202321 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.202357 kubelet[3260]: W0517 00:23:43.202327 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.202357 kubelet[3260]: E0517 00:23:43.202335 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.202427 kubelet[3260]: E0517 00:23:43.202421 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.202467 kubelet[3260]: W0517 00:23:43.202427 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.202467 kubelet[3260]: E0517 00:23:43.202438 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.202563 kubelet[3260]: E0517 00:23:43.202557 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.202597 kubelet[3260]: W0517 00:23:43.202562 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.202597 kubelet[3260]: E0517 00:23:43.202570 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.202714 kubelet[3260]: E0517 00:23:43.202708 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.202741 kubelet[3260]: W0517 00:23:43.202714 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.202741 kubelet[3260]: E0517 00:23:43.202721 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.210480 containerd[1952]: time="2025-05-17T00:23:43.210459498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5gqjp,Uid:97319639-879f-4787-beca-9b1e6540e2b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee7928812c6864f287dacffe28954b4a716bb9a359b567e72ec86c2d3cc9e872\"" May 17 00:23:43.302978 kubelet[3260]: E0517 00:23:43.302951 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.302978 kubelet[3260]: W0517 00:23:43.302973 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.303171 kubelet[3260]: E0517 00:23:43.302998 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.303335 kubelet[3260]: E0517 00:23:43.303315 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.303335 kubelet[3260]: W0517 00:23:43.303332 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.303487 kubelet[3260]: E0517 00:23:43.303356 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.303679 kubelet[3260]: E0517 00:23:43.303635 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.303679 kubelet[3260]: W0517 00:23:43.303650 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.303679 kubelet[3260]: E0517 00:23:43.303670 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.303917 kubelet[3260]: E0517 00:23:43.303896 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.303917 kubelet[3260]: W0517 00:23:43.303911 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.304043 kubelet[3260]: E0517 00:23:43.303937 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.304220 kubelet[3260]: E0517 00:23:43.304205 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.304261 kubelet[3260]: W0517 00:23:43.304224 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.304261 kubelet[3260]: E0517 00:23:43.304248 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.304521 kubelet[3260]: E0517 00:23:43.304493 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.304521 kubelet[3260]: W0517 00:23:43.304516 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.304626 kubelet[3260]: E0517 00:23:43.304534 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.304813 kubelet[3260]: E0517 00:23:43.304770 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.304813 kubelet[3260]: W0517 00:23:43.304784 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.304975 kubelet[3260]: E0517 00:23:43.304830 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.305057 kubelet[3260]: E0517 00:23:43.305042 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.305102 kubelet[3260]: W0517 00:23:43.305057 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.305153 kubelet[3260]: E0517 00:23:43.305107 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.305310 kubelet[3260]: E0517 00:23:43.305299 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.305348 kubelet[3260]: W0517 00:23:43.305311 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.305348 kubelet[3260]: E0517 00:23:43.305325 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.305501 kubelet[3260]: E0517 00:23:43.305491 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.305550 kubelet[3260]: W0517 00:23:43.305501 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.305550 kubelet[3260]: E0517 00:23:43.305524 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.305766 kubelet[3260]: E0517 00:23:43.305726 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.305766 kubelet[3260]: W0517 00:23:43.305735 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.305766 kubelet[3260]: E0517 00:23:43.305748 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.305991 kubelet[3260]: E0517 00:23:43.305949 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.305991 kubelet[3260]: W0517 00:23:43.305959 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.305991 kubelet[3260]: E0517 00:23:43.305972 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.306259 kubelet[3260]: E0517 00:23:43.306223 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.306259 kubelet[3260]: W0517 00:23:43.306238 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.306259 kubelet[3260]: E0517 00:23:43.306256 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.306469 kubelet[3260]: E0517 00:23:43.306458 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.306536 kubelet[3260]: W0517 00:23:43.306469 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.306536 kubelet[3260]: E0517 00:23:43.306492 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.306653 kubelet[3260]: E0517 00:23:43.306642 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.306704 kubelet[3260]: W0517 00:23:43.306653 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.306704 kubelet[3260]: E0517 00:23:43.306674 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.306812 kubelet[3260]: E0517 00:23:43.306801 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.306859 kubelet[3260]: W0517 00:23:43.306812 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.306859 kubelet[3260]: E0517 00:23:43.306832 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.306995 kubelet[3260]: E0517 00:23:43.306985 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.307033 kubelet[3260]: W0517 00:23:43.306996 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.307033 kubelet[3260]: E0517 00:23:43.307009 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.307172 kubelet[3260]: E0517 00:23:43.307163 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.307212 kubelet[3260]: W0517 00:23:43.307173 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.307212 kubelet[3260]: E0517 00:23:43.307186 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.307361 kubelet[3260]: E0517 00:23:43.307352 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.307403 kubelet[3260]: W0517 00:23:43.307362 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.307403 kubelet[3260]: E0517 00:23:43.307374 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.307586 kubelet[3260]: E0517 00:23:43.307574 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.307586 kubelet[3260]: W0517 00:23:43.307585 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.307669 kubelet[3260]: E0517 00:23:43.307597 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.307764 kubelet[3260]: E0517 00:23:43.307754 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.307802 kubelet[3260]: W0517 00:23:43.307766 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.307802 kubelet[3260]: E0517 00:23:43.307779 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.307945 kubelet[3260]: E0517 00:23:43.307935 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.307985 kubelet[3260]: W0517 00:23:43.307945 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.308024 kubelet[3260]: E0517 00:23:43.307992 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:43.308156 kubelet[3260]: E0517 00:23:43.308146 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.308197 kubelet[3260]: W0517 00:23:43.308157 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.308197 kubelet[3260]: E0517 00:23:43.308169 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.308369 kubelet[3260]: E0517 00:23:43.308358 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.308369 kubelet[3260]: W0517 00:23:43.308368 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.308441 kubelet[3260]: E0517 00:23:43.308380 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.308576 kubelet[3260]: E0517 00:23:43.308565 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.308616 kubelet[3260]: W0517 00:23:43.308575 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.308616 kubelet[3260]: E0517 00:23:43.308585 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:43.316517 kubelet[3260]: E0517 00:23:43.316469 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:43.316517 kubelet[3260]: W0517 00:23:43.316484 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:43.316517 kubelet[3260]: E0517 00:23:43.316501 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:44.331013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1452107093.mount: Deactivated successfully. 
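The burst above is kubelet's FlexVolume plugin prober failing in a loop: each probe of the plugin-directory entry nodeagent~uds execs the driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the exec fails because the binary is not installed yet, the captured output is therefore empty, and unmarshalling that empty output produces Go's encoding/json error "unexpected end of JSON input", which then surfaces as the plugins.go "Error dynamically probing plugins" line. A minimal Go sketch of that failure chain (an assumption-laden simplification of kubelet's driver-call logic, not the actual kubelet source; the driver path is taken from the log, and the driverStatus shape follows the FlexVolume convention):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON a FlexVolume driver is expected to print for
// "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func callDriver(driver string, args ...string) (*driverStatus, error) {
	out, err := exec.Command(driver, args...).CombinedOutput()
	if err != nil {
		// Corresponds to the W... lines above: the call fails, output is "".
		fmt.Printf("FlexVolume: driver call failed: executable: %s, args: %v, error: %v, output: %q\n",
			driver, args, err, string(out))
	}
	var status driverStatus
	if err := json.Unmarshal(out, &status); err != nil {
		// With out == "" this is exactly "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output for command: %s, output: %q, error: %v",
			args[0], string(out), err)
	}
	return &status, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}
```

One wording difference from the log: with a nonexistent absolute path, exec reports a path error rather than kubelet's "executable file not found in $PATH" message, but the second half of the chain is identical, since json.Unmarshal over zero bytes always returns "unexpected end of JSON input".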
May 17 00:23:44.635523 containerd[1952]: time="2025-05-17T00:23:44.635461685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:44.635744 containerd[1952]: time="2025-05-17T00:23:44.635647374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 17 00:23:44.636040 containerd[1952]: time="2025-05-17T00:23:44.636026025Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:44.637041 containerd[1952]: time="2025-05-17T00:23:44.637029444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:44.637457 containerd[1952]: time="2025-05-17T00:23:44.637444974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 1.81533147s" May 17 00:23:44.637480 containerd[1952]: time="2025-05-17T00:23:44.637461692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:23:44.637956 containerd[1952]: time="2025-05-17T00:23:44.637944594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:23:44.641011 containerd[1952]: time="2025-05-17T00:23:44.640992557Z" level=info msg="CreateContainer within sandbox \"aa82c55e6e5cff945e379081341b11595d689f159cdb17c5ae9f92a9e41fa108\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:23:44.644865 containerd[1952]: time="2025-05-17T00:23:44.644846618Z" level=info msg="CreateContainer within sandbox \"aa82c55e6e5cff945e379081341b11595d689f159cdb17c5ae9f92a9e41fa108\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"42f5f9cd96ecd6168fba59197e41c85f851d589aab034882fe345998be0f9ddb\"" May 17 00:23:44.645136 containerd[1952]: time="2025-05-17T00:23:44.645119012Z" level=info msg="StartContainer for \"42f5f9cd96ecd6168fba59197e41c85f851d589aab034882fe345998be0f9ddb\"" May 17 00:23:44.693968 containerd[1952]: time="2025-05-17T00:23:44.693946256Z" level=info msg="StartContainer for \"42f5f9cd96ecd6168fba59197e41c85f851d589aab034882fe345998be0f9ddb\" returns successfully" May 17 00:23:45.404961 kubelet[3260]: E0517 00:23:45.404838 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hkm4q" podUID="403f708a-0226-4cfa-98fa-55326c364f55" May 17 00:23:45.463900 kubelet[3260]: I0517 00:23:45.463864 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b9f5c7694-db75h" podStartSLOduration=1.647948594 podStartE2EDuration="3.463852457s" podCreationTimestamp="2025-05-17 00:23:42 +0000 UTC" firstStartedPulling="2025-05-17 00:23:42.821958113 +0000 UTC m=+15.490930644" 
lastFinishedPulling="2025-05-17 00:23:44.637861974 +0000 UTC m=+17.306834507" observedRunningTime="2025-05-17 00:23:45.463672853 +0000 UTC m=+18.132645393" watchObservedRunningTime="2025-05-17 00:23:45.463852457 +0000 UTC m=+18.132824989" May 17 00:23:45.511544 kubelet[3260]: E0517 00:23:45.511457 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.511544 kubelet[3260]: W0517 00:23:45.511534 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.511996 kubelet[3260]: E0517 00:23:45.511596 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.512418 kubelet[3260]: E0517 00:23:45.512368 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.512418 kubelet[3260]: W0517 00:23:45.512412 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.512878 kubelet[3260]: E0517 00:23:45.512463 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.513141 kubelet[3260]: E0517 00:23:45.513101 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.513334 kubelet[3260]: W0517 00:23:45.513137 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.513334 kubelet[3260]: E0517 00:23:45.513184 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.514063 kubelet[3260]: E0517 00:23:45.514013 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.514063 kubelet[3260]: W0517 00:23:45.514057 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.514467 kubelet[3260]: E0517 00:23:45.514107 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:45.514801 kubelet[3260]: E0517 00:23:45.514759 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.514801 kubelet[3260]: W0517 00:23:45.514796 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.515146 kubelet[3260]: E0517 00:23:45.514842 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.515533 kubelet[3260]: E0517 00:23:45.515457 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.515533 kubelet[3260]: W0517 00:23:45.515499 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.515930 kubelet[3260]: E0517 00:23:45.515568 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.516216 kubelet[3260]: E0517 00:23:45.516176 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.516216 kubelet[3260]: W0517 00:23:45.516212 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.516585 kubelet[3260]: E0517 00:23:45.516256 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.517002 kubelet[3260]: E0517 00:23:45.516948 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.517002 kubelet[3260]: W0517 00:23:45.516990 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.517417 kubelet[3260]: E0517 00:23:45.517040 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.517793 kubelet[3260]: E0517 00:23:45.517747 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.517793 kubelet[3260]: W0517 00:23:45.517785 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.518134 kubelet[3260]: E0517 00:23:45.517832 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:45.518448 kubelet[3260]: E0517 00:23:45.518406 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.518667 kubelet[3260]: W0517 00:23:45.518473 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.518667 kubelet[3260]: E0517 00:23:45.518537 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.519132 kubelet[3260]: E0517 00:23:45.519086 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.519132 kubelet[3260]: W0517 00:23:45.519124 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.519471 kubelet[3260]: E0517 00:23:45.519167 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.519864 kubelet[3260]: E0517 00:23:45.519815 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.519864 kubelet[3260]: W0517 00:23:45.519860 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.520266 kubelet[3260]: E0517 00:23:45.519910 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.520598 kubelet[3260]: E0517 00:23:45.520552 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.520598 kubelet[3260]: W0517 00:23:45.520592 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.520961 kubelet[3260]: E0517 00:23:45.520638 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.521258 kubelet[3260]: E0517 00:23:45.521213 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.521258 kubelet[3260]: W0517 00:23:45.521250 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.521619 kubelet[3260]: E0517 00:23:45.521295 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:45.521925 kubelet[3260]: E0517 00:23:45.521883 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.521925 kubelet[3260]: W0517 00:23:45.521917 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.522256 kubelet[3260]: E0517 00:23:45.521960 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.525479 kubelet[3260]: E0517 00:23:45.525433 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.525479 kubelet[3260]: W0517 00:23:45.525471 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.525829 kubelet[3260]: E0517 00:23:45.525533 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.526206 kubelet[3260]: E0517 00:23:45.526130 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.526206 kubelet[3260]: W0517 00:23:45.526166 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.526206 kubelet[3260]: E0517 00:23:45.526210 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.526841 kubelet[3260]: E0517 00:23:45.526795 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.527022 kubelet[3260]: W0517 00:23:45.526839 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.527022 kubelet[3260]: E0517 00:23:45.526893 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.527497 kubelet[3260]: E0517 00:23:45.527458 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.527497 kubelet[3260]: W0517 00:23:45.527491 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.527843 kubelet[3260]: E0517 00:23:45.527556 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:45.528158 kubelet[3260]: E0517 00:23:45.528113 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.528158 kubelet[3260]: W0517 00:23:45.528152 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.528539 kubelet[3260]: E0517 00:23:45.528260 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.528764 kubelet[3260]: E0517 00:23:45.528725 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.528764 kubelet[3260]: W0517 00:23:45.528757 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.529081 kubelet[3260]: E0517 00:23:45.528878 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.529302 kubelet[3260]: E0517 00:23:45.529266 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.529302 kubelet[3260]: W0517 00:23:45.529297 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.529665 kubelet[3260]: E0517 00:23:45.529422 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.530024 kubelet[3260]: E0517 00:23:45.529979 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.530024 kubelet[3260]: W0517 00:23:45.530019 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.530395 kubelet[3260]: E0517 00:23:45.530074 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.530871 kubelet[3260]: E0517 00:23:45.530793 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.530871 kubelet[3260]: W0517 00:23:45.530828 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.530871 kubelet[3260]: E0517 00:23:45.530872 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:45.531455 kubelet[3260]: E0517 00:23:45.531403 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.531455 kubelet[3260]: W0517 00:23:45.531438 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.531732 kubelet[3260]: E0517 00:23:45.531565 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.532099 kubelet[3260]: E0517 00:23:45.532021 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.532099 kubelet[3260]: W0517 00:23:45.532057 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.532403 kubelet[3260]: E0517 00:23:45.532142 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.532682 kubelet[3260]: E0517 00:23:45.532610 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.532682 kubelet[3260]: W0517 00:23:45.532637 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.532987 kubelet[3260]: E0517 00:23:45.532715 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.533228 kubelet[3260]: E0517 00:23:45.533151 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.533228 kubelet[3260]: W0517 00:23:45.533185 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.533577 kubelet[3260]: E0517 00:23:45.533265 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.533797 kubelet[3260]: E0517 00:23:45.533714 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.533797 kubelet[3260]: W0517 00:23:45.533749 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.533797 kubelet[3260]: E0517 00:23:45.533790 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:45.534480 kubelet[3260]: E0517 00:23:45.534439 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.534690 kubelet[3260]: W0517 00:23:45.534477 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.534690 kubelet[3260]: E0517 00:23:45.534574 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.535259 kubelet[3260]: E0517 00:23:45.535202 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.535259 kubelet[3260]: W0517 00:23:45.535237 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.535614 kubelet[3260]: E0517 00:23:45.535278 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.536006 kubelet[3260]: E0517 00:23:45.535959 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.536006 kubelet[3260]: W0517 00:23:45.535999 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.536413 kubelet[3260]: E0517 00:23:45.536059 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:45.536674 kubelet[3260]: E0517 00:23:45.536638 3260 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:45.536674 kubelet[3260]: W0517 00:23:45.536669 3260 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:45.536928 kubelet[3260]: E0517 00:23:45.536708 3260 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
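
The repeated failures above are kubelet's FlexVolume prober: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it execs the driver binary with the single argument init and unmarshals the binary's stdout as JSON, so a missing executable ("executable file not found in $PATH") yields empty output and the "unexpected end of JSON input" error. The nodeagent~uds driver is the one Calico's flexvol-driver init container (pulled just below) is meant to install. A minimal sketch of a driver answering the init call, assuming only the documented FlexVolume call convention; the struct is a pared-down stand-in for kubelet's DriverStatus:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus is a simplified form of the JSON object kubelet's
// driver-call.go expects on the driver's stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success; a node-local driver needs no controller-side attach.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Calls the driver does not implement still answer in JSON.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
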
May 17 00:23:46.106981 containerd[1952]: time="2025-05-17T00:23:46.106959986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:46.107212 containerd[1952]: time="2025-05-17T00:23:46.107191620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:23:46.107562 containerd[1952]: time="2025-05-17T00:23:46.107550668Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:46.108464 containerd[1952]: time="2025-05-17T00:23:46.108450035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:46.108877 containerd[1952]: time="2025-05-17T00:23:46.108843910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.470882958s" May 17 00:23:46.108877 containerd[1952]: time="2025-05-17T00:23:46.108858009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:23:46.109902 containerd[1952]: time="2025-05-17T00:23:46.109866093Z" level=info msg="CreateContainer within sandbox \"ee7928812c6864f287dacffe28954b4a716bb9a359b567e72ec86c2d3cc9e872\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:23:46.114424 containerd[1952]: time="2025-05-17T00:23:46.114409613Z" level=info msg="CreateContainer within sandbox \"ee7928812c6864f287dacffe28954b4a716bb9a359b567e72ec86c2d3cc9e872\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"71f6de4157a3d0124c02464756c0ff957f766fb24db70c693de23288e8d2d58a\"" May 17 00:23:46.114775 containerd[1952]: time="2025-05-17T00:23:46.114722330Z" level=info msg="StartContainer for \"71f6de4157a3d0124c02464756c0ff957f766fb24db70c693de23288e8d2d58a\"" May 17 00:23:46.164293 containerd[1952]: time="2025-05-17T00:23:46.164231507Z" level=info msg="StartContainer for \"71f6de4157a3d0124c02464756c0ff957f766fb24db70c693de23288e8d2d58a\" returns successfully" May 17 00:23:46.178881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71f6de4157a3d0124c02464756c0ff957f766fb24db70c693de23288e8d2d58a-rootfs.mount: Deactivated successfully.
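
Here containerd's CRI plugin pulls ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0, creates the flexvol-driver container inside the existing pod sandbox, starts it, and its rootfs is unmounted almost immediately; that is consistent with flexvol-driver being a short-lived init container that installs the uds binary and exits, which also explains the shim-disconnect cleanup that follows. The pull itself can be reproduced with the containerd Go client against the same namespace the CRI plugin uses; a sketch, assuming containerd's default socket path:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images (the io.cri-containerd.image=managed label in the
	// ImageCreate events above) live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx,
		"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s, digest %s", img.Name(), img.Target().Digest)
}
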
May 17 00:23:46.463393 kubelet[3260]: I0517 00:23:46.463335 3260 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:23:47.404706 kubelet[3260]: E0517 00:23:47.404654 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hkm4q" podUID="403f708a-0226-4cfa-98fa-55326c364f55" May 17 00:23:47.497973 containerd[1952]: time="2025-05-17T00:23:47.497847496Z" level=error msg="collecting metrics for 71f6de4157a3d0124c02464756c0ff957f766fb24db70c693de23288e8d2d58a" error="cgroups: cgroup deleted: unknown" May 17 00:23:47.972544 containerd[1952]: time="2025-05-17T00:23:47.972492695Z" level=info msg="shim disconnected" id=71f6de4157a3d0124c02464756c0ff957f766fb24db70c693de23288e8d2d58a namespace=k8s.io May 17 00:23:47.972544 containerd[1952]: time="2025-05-17T00:23:47.972539842Z" level=warning msg="cleaning up after shim disconnected" id=71f6de4157a3d0124c02464756c0ff957f766fb24db70c693de23288e8d2d58a namespace=k8s.io May 17 00:23:47.972544 containerd[1952]: time="2025-05-17T00:23:47.972546330Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:23:48.475253 containerd[1952]: time="2025-05-17T00:23:48.475171608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:23:49.404955 kubelet[3260]: E0517 00:23:49.404864 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hkm4q" podUID="403f708a-0226-4cfa-98fa-55326c364f55" May 17 00:23:50.819032 containerd[1952]: time="2025-05-17T00:23:50.819010707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:50.819241 containerd[1952]: time="2025-05-17T00:23:50.819211385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:23:50.819587 containerd[1952]: time="2025-05-17T00:23:50.819575937Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:50.820550 containerd[1952]: time="2025-05-17T00:23:50.820539298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:50.820930 containerd[1952]: time="2025-05-17T00:23:50.820918290Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 2.345674909s" May 17 00:23:50.820953 containerd[1952]: time="2025-05-17T00:23:50.820934111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:23:50.821911 containerd[1952]: time="2025-05-17T00:23:50.821900045Z" level=info 
msg="CreateContainer within sandbox \"ee7928812c6864f287dacffe28954b4a716bb9a359b567e72ec86c2d3cc9e872\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:23:50.826496 containerd[1952]: time="2025-05-17T00:23:50.826439479Z" level=info msg="CreateContainer within sandbox \"ee7928812c6864f287dacffe28954b4a716bb9a359b567e72ec86c2d3cc9e872\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1ff523bdc681339bdd5f35789d6b11682ea07d805605601b595f2de45c923a43\"" May 17 00:23:50.826703 containerd[1952]: time="2025-05-17T00:23:50.826690349Z" level=info msg="StartContainer for \"1ff523bdc681339bdd5f35789d6b11682ea07d805605601b595f2de45c923a43\"" May 17 00:23:50.876656 containerd[1952]: time="2025-05-17T00:23:50.876587232Z" level=info msg="StartContainer for \"1ff523bdc681339bdd5f35789d6b11682ea07d805605601b595f2de45c923a43\" returns successfully" May 17 00:23:51.403766 kubelet[3260]: E0517 00:23:51.403737 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hkm4q" podUID="403f708a-0226-4cfa-98fa-55326c364f55" May 17 00:23:51.435390 kubelet[3260]: I0517 00:23:51.435375 3260 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:23:51.437564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ff523bdc681339bdd5f35789d6b11682ea07d805605601b595f2de45c923a43-rootfs.mount: Deactivated successfully. May 17 00:23:51.469692 kubelet[3260]: I0517 00:23:51.469665 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6-calico-apiserver-certs\") pod \"calico-apiserver-69b4545c59-j694m\" (UID: \"2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6\") " pod="calico-apiserver/calico-apiserver-69b4545c59-j694m" May 17 00:23:51.469808 kubelet[3260]: I0517 00:23:51.469793 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2q9l\" (UniqueName: \"kubernetes.io/projected/2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6-kube-api-access-h2q9l\") pod \"calico-apiserver-69b4545c59-j694m\" (UID: \"2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6\") " pod="calico-apiserver/calico-apiserver-69b4545c59-j694m" May 17 00:23:51.469837 kubelet[3260]: I0517 00:23:51.469821 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ea4c5df-b413-4cdc-89c1-362994a46bb2-whisker-backend-key-pair\") pod \"whisker-54b6765ddc-b72d9\" (UID: \"8ea4c5df-b413-4cdc-89c1-362994a46bb2\") " pod="calico-system/whisker-54b6765ddc-b72d9" May 17 00:23:51.469866 kubelet[3260]: I0517 00:23:51.469847 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65mgz\" (UniqueName: \"kubernetes.io/projected/7c71ecf2-a673-4248-97b2-14f7ed1d6b2c-kube-api-access-65mgz\") pod \"calico-kube-controllers-5cb9df59f4-w67vm\" (UID: \"7c71ecf2-a673-4248-97b2-14f7ed1d6b2c\") " pod="calico-system/calico-kube-controllers-5cb9df59f4-w67vm" May 17 00:23:51.469892 kubelet[3260]: I0517 00:23:51.469873 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96kph\" (UniqueName: 
\"kubernetes.io/projected/ed62d733-94ab-45e2-84ca-07ad82b2634c-kube-api-access-96kph\") pod \"coredns-7c65d6cfc9-mm8pw\" (UID: \"ed62d733-94ab-45e2-84ca-07ad82b2634c\") " pod="kube-system/coredns-7c65d6cfc9-mm8pw" May 17 00:23:51.469924 kubelet[3260]: I0517 00:23:51.469897 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/307fc931-df0d-46ad-a0f5-9c642c960ef0-config\") pod \"goldmane-8f77d7b6c-jr8s4\" (UID: \"307fc931-df0d-46ad-a0f5-9c642c960ef0\") " pod="calico-system/goldmane-8f77d7b6c-jr8s4" May 17 00:23:51.469948 kubelet[3260]: I0517 00:23:51.469922 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6d8a08e2-6fb2-41c5-aa91-98552c67cdeb-calico-apiserver-certs\") pod \"calico-apiserver-69b4545c59-2hqxd\" (UID: \"6d8a08e2-6fb2-41c5-aa91-98552c67cdeb\") " pod="calico-apiserver/calico-apiserver-69b4545c59-2hqxd" May 17 00:23:51.469976 kubelet[3260]: I0517 00:23:51.469944 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c71ecf2-a673-4248-97b2-14f7ed1d6b2c-tigera-ca-bundle\") pod \"calico-kube-controllers-5cb9df59f4-w67vm\" (UID: \"7c71ecf2-a673-4248-97b2-14f7ed1d6b2c\") " pod="calico-system/calico-kube-controllers-5cb9df59f4-w67vm" May 17 00:23:51.469976 kubelet[3260]: I0517 00:23:51.469968 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq45m\" (UniqueName: \"kubernetes.io/projected/6d8a08e2-6fb2-41c5-aa91-98552c67cdeb-kube-api-access-nq45m\") pod \"calico-apiserver-69b4545c59-2hqxd\" (UID: \"6d8a08e2-6fb2-41c5-aa91-98552c67cdeb\") " pod="calico-apiserver/calico-apiserver-69b4545c59-2hqxd" May 17 00:23:51.470051 kubelet[3260]: I0517 00:23:51.469991 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/307fc931-df0d-46ad-a0f5-9c642c960ef0-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-jr8s4\" (UID: \"307fc931-df0d-46ad-a0f5-9c642c960ef0\") " pod="calico-system/goldmane-8f77d7b6c-jr8s4" May 17 00:23:51.470092 kubelet[3260]: I0517 00:23:51.470047 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5181617e-229c-477a-82dd-f49d74685250-config-volume\") pod \"coredns-7c65d6cfc9-kmxkn\" (UID: \"5181617e-229c-477a-82dd-f49d74685250\") " pod="kube-system/coredns-7c65d6cfc9-kmxkn" May 17 00:23:51.470092 kubelet[3260]: I0517 00:23:51.470083 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/307fc931-df0d-46ad-a0f5-9c642c960ef0-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-jr8s4\" (UID: \"307fc931-df0d-46ad-a0f5-9c642c960ef0\") " pod="calico-system/goldmane-8f77d7b6c-jr8s4" May 17 00:23:51.470169 kubelet[3260]: I0517 00:23:51.470112 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h4nt\" (UniqueName: \"kubernetes.io/projected/307fc931-df0d-46ad-a0f5-9c642c960ef0-kube-api-access-9h4nt\") pod \"goldmane-8f77d7b6c-jr8s4\" (UID: \"307fc931-df0d-46ad-a0f5-9c642c960ef0\") " pod="calico-system/goldmane-8f77d7b6c-jr8s4" May 17 00:23:51.470169 
kubelet[3260]: I0517 00:23:51.470137 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf586\" (UniqueName: \"kubernetes.io/projected/8ea4c5df-b413-4cdc-89c1-362994a46bb2-kube-api-access-pf586\") pod \"whisker-54b6765ddc-b72d9\" (UID: \"8ea4c5df-b413-4cdc-89c1-362994a46bb2\") " pod="calico-system/whisker-54b6765ddc-b72d9" May 17 00:23:51.470169 kubelet[3260]: I0517 00:23:51.470165 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6hz\" (UniqueName: \"kubernetes.io/projected/5181617e-229c-477a-82dd-f49d74685250-kube-api-access-9r6hz\") pod \"coredns-7c65d6cfc9-kmxkn\" (UID: \"5181617e-229c-477a-82dd-f49d74685250\") " pod="kube-system/coredns-7c65d6cfc9-kmxkn" May 17 00:23:51.470282 kubelet[3260]: I0517 00:23:51.470192 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ea4c5df-b413-4cdc-89c1-362994a46bb2-whisker-ca-bundle\") pod \"whisker-54b6765ddc-b72d9\" (UID: \"8ea4c5df-b413-4cdc-89c1-362994a46bb2\") " pod="calico-system/whisker-54b6765ddc-b72d9" May 17 00:23:51.470282 kubelet[3260]: I0517 00:23:51.470218 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed62d733-94ab-45e2-84ca-07ad82b2634c-config-volume\") pod \"coredns-7c65d6cfc9-mm8pw\" (UID: \"ed62d733-94ab-45e2-84ca-07ad82b2634c\") " pod="kube-system/coredns-7c65d6cfc9-mm8pw" May 17 00:23:51.749355 containerd[1952]: time="2025-05-17T00:23:51.749271091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b4545c59-j694m,Uid:2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6,Namespace:calico-apiserver,Attempt:0,}" May 17 00:23:51.749779 containerd[1952]: time="2025-05-17T00:23:51.749271195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kmxkn,Uid:5181617e-229c-477a-82dd-f49d74685250,Namespace:kube-system,Attempt:0,}" May 17 00:23:51.749779 containerd[1952]: time="2025-05-17T00:23:51.749578085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b4545c59-2hqxd,Uid:6d8a08e2-6fb2-41c5-aa91-98552c67cdeb,Namespace:calico-apiserver,Attempt:0,}" May 17 00:23:51.750092 containerd[1952]: time="2025-05-17T00:23:51.749588294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-jr8s4,Uid:307fc931-df0d-46ad-a0f5-9c642c960ef0,Namespace:calico-system,Attempt:0,}" May 17 00:23:51.750570 containerd[1952]: time="2025-05-17T00:23:51.750484269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b6765ddc-b72d9,Uid:8ea4c5df-b413-4cdc-89c1-362994a46bb2,Namespace:calico-system,Attempt:0,}" May 17 00:23:51.750850 containerd[1952]: time="2025-05-17T00:23:51.750776310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mm8pw,Uid:ed62d733-94ab-45e2-84ca-07ad82b2634c,Namespace:kube-system,Attempt:0,}" May 17 00:23:51.751255 containerd[1952]: time="2025-05-17T00:23:51.751190734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb9df59f4-w67vm,Uid:7c71ecf2-a673-4248-97b2-14f7ed1d6b2c,Namespace:calico-system,Attempt:0,}" May 17 00:23:51.808985 containerd[1952]: time="2025-05-17T00:23:51.808937600Z" level=info msg="shim disconnected" id=1ff523bdc681339bdd5f35789d6b11682ea07d805605601b595f2de45c923a43 namespace=k8s.io May 17 00:23:51.808985 
containerd[1952]: time="2025-05-17T00:23:51.808979064Z" level=warning msg="cleaning up after shim disconnected" id=1ff523bdc681339bdd5f35789d6b11682ea07d805605601b595f2de45c923a43 namespace=k8s.io May 17 00:23:51.808985 containerd[1952]: time="2025-05-17T00:23:51.808983952Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:23:51.851846 containerd[1952]: time="2025-05-17T00:23:51.851804177Z" level=error msg="Failed to destroy network for sandbox \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.852187 containerd[1952]: time="2025-05-17T00:23:51.852066617Z" level=error msg="encountered an error cleaning up failed sandbox \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.852187 containerd[1952]: time="2025-05-17T00:23:51.852096003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-jr8s4,Uid:307fc931-df0d-46ad-a0f5-9c642c960ef0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.852296 containerd[1952]: time="2025-05-17T00:23:51.852213358Z" level=error msg="Failed to destroy network for sandbox \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.852331 kubelet[3260]: E0517 00:23:51.852247 3260 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.852331 kubelet[3260]: E0517 00:23:51.852311 3260 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-jr8s4" May 17 00:23:51.852428 kubelet[3260]: E0517 00:23:51.852328 3260 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-jr8s4" May 17 00:23:51.852428 
kubelet[3260]: E0517 00:23:51.852361 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-jr8s4_calico-system(307fc931-df0d-46ad-a0f5-9c642c960ef0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-jr8s4_calico-system(307fc931-df0d-46ad-a0f5-9c642c960ef0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:23:51.852530 containerd[1952]: time="2025-05-17T00:23:51.852380747Z" level=error msg="encountered an error cleaning up failed sandbox \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.852530 containerd[1952]: time="2025-05-17T00:23:51.852403309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b4545c59-j694m,Uid:2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.852611 kubelet[3260]: E0517 00:23:51.852490 3260 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.852611 kubelet[3260]: E0517 00:23:51.852599 3260 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b4545c59-j694m" May 17 00:23:51.852677 kubelet[3260]: E0517 00:23:51.852611 3260 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b4545c59-j694m" May 17 00:23:51.852677 kubelet[3260]: E0517 00:23:51.852634 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b4545c59-j694m_calico-apiserver(2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-69b4545c59-j694m_calico-apiserver(2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b4545c59-j694m" podUID="2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6" May 17 00:23:51.852945 containerd[1952]: time="2025-05-17T00:23:51.852926469Z" level=error msg="Failed to destroy network for sandbox \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.853115 containerd[1952]: time="2025-05-17T00:23:51.853101625Z" level=error msg="encountered an error cleaning up failed sandbox \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.853141 containerd[1952]: time="2025-05-17T00:23:51.853127622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b6765ddc-b72d9,Uid:8ea4c5df-b413-4cdc-89c1-362994a46bb2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.853174 containerd[1952]: time="2025-05-17T00:23:51.853110161Z" level=error msg="Failed to destroy network for sandbox \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.853241 kubelet[3260]: E0517 00:23:51.853227 3260 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.853267 kubelet[3260]: E0517 00:23:51.853251 3260 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54b6765ddc-b72d9" May 17 00:23:51.853267 kubelet[3260]: E0517 00:23:51.853262 3260 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54b6765ddc-b72d9" May 17 00:23:51.853307 kubelet[3260]: E0517 00:23:51.853285 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54b6765ddc-b72d9_calico-system(8ea4c5df-b413-4cdc-89c1-362994a46bb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54b6765ddc-b72d9_calico-system(8ea4c5df-b413-4cdc-89c1-362994a46bb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54b6765ddc-b72d9" podUID="8ea4c5df-b413-4cdc-89c1-362994a46bb2" May 17 00:23:51.853339 containerd[1952]: time="2025-05-17T00:23:51.853319885Z" level=error msg="encountered an error cleaning up failed sandbox \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.853360 containerd[1952]: time="2025-05-17T00:23:51.853340093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kmxkn,Uid:5181617e-229c-477a-82dd-f49d74685250,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.853416 kubelet[3260]: E0517 00:23:51.853403 3260 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.853437 kubelet[3260]: E0517 00:23:51.853423 3260 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kmxkn" May 17 00:23:51.853456 kubelet[3260]: E0517 00:23:51.853433 3260 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kmxkn" May 17 00:23:51.853475 kubelet[3260]: E0517 00:23:51.853450 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7c65d6cfc9-kmxkn_kube-system(5181617e-229c-477a-82dd-f49d74685250)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kmxkn_kube-system(5181617e-229c-477a-82dd-f49d74685250)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kmxkn" podUID="5181617e-229c-477a-82dd-f49d74685250" May 17 00:23:51.853845 containerd[1952]: time="2025-05-17T00:23:51.853832241Z" level=error msg="Failed to destroy network for sandbox \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.853988 containerd[1952]: time="2025-05-17T00:23:51.853976035Z" level=error msg="encountered an error cleaning up failed sandbox \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.854013 containerd[1952]: time="2025-05-17T00:23:51.853998294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b4545c59-2hqxd,Uid:6d8a08e2-6fb2-41c5-aa91-98552c67cdeb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.854082 kubelet[3260]: E0517 00:23:51.854054 3260 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.854104 kubelet[3260]: E0517 00:23:51.854091 3260 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b4545c59-2hqxd" May 17 00:23:51.854127 kubelet[3260]: E0517 00:23:51.854101 3260 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b4545c59-2hqxd" May 17 00:23:51.854127 kubelet[3260]: E0517 00:23:51.854118 3260 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b4545c59-2hqxd_calico-apiserver(6d8a08e2-6fb2-41c5-aa91-98552c67cdeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b4545c59-2hqxd_calico-apiserver(6d8a08e2-6fb2-41c5-aa91-98552c67cdeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b4545c59-2hqxd" podUID="6d8a08e2-6fb2-41c5-aa91-98552c67cdeb" May 17 00:23:51.854364 containerd[1952]: time="2025-05-17T00:23:51.854344103Z" level=error msg="Failed to destroy network for sandbox \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.854472 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85-shm.mount: Deactivated successfully. May 17 00:23:51.854600 containerd[1952]: time="2025-05-17T00:23:51.854516940Z" level=error msg="encountered an error cleaning up failed sandbox \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.854600 containerd[1952]: time="2025-05-17T00:23:51.854550523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb9df59f4-w67vm,Uid:7c71ecf2-a673-4248-97b2-14f7ed1d6b2c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.854600 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae-shm.mount: Deactivated successfully. 
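
Every sandbox ADD in this cascade fails before pod networking is configured, and until the Calico CNI initializes, containerd keeps publishing NetworkReady=false; that is the condition kubelet surfaced earlier as "NetworkReady=false reason:NetworkPluginNotReady ... cni plugin not initialized" for csi-node-driver-hkm4q. The condition can be read directly off the CRI endpoint; a sketch using the CRI API client, again assuming containerd's default socket:

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Status(context.Background(), &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// Kubelet gates pod networking on the NetworkReady condition reported here.
	for _, cond := range resp.Status.Conditions {
		fmt.Printf("%s=%v reason=%q message=%q\n",
			cond.Type, cond.Status, cond.Reason, cond.Message)
	}
}
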
May 17 00:23:51.854685 kubelet[3260]: E0517 00:23:51.854619 3260 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.854685 kubelet[3260]: E0517 00:23:51.854634 3260 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb9df59f4-w67vm" May 17 00:23:51.854685 kubelet[3260]: E0517 00:23:51.854643 3260 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cb9df59f4-w67vm" May 17 00:23:51.854741 kubelet[3260]: E0517 00:23:51.854670 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cb9df59f4-w67vm_calico-system(7c71ecf2-a673-4248-97b2-14f7ed1d6b2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cb9df59f4-w67vm_calico-system(7c71ecf2-a673-4248-97b2-14f7ed1d6b2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cb9df59f4-w67vm" podUID="7c71ecf2-a673-4248-97b2-14f7ed1d6b2c" May 17 00:23:51.855237 containerd[1952]: time="2025-05-17T00:23:51.855219196Z" level=error msg="Failed to destroy network for sandbox \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.855413 containerd[1952]: time="2025-05-17T00:23:51.855397079Z" level=error msg="encountered an error cleaning up failed sandbox \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.855440 containerd[1952]: time="2025-05-17T00:23:51.855423811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mm8pw,Uid:ed62d733-94ab-45e2-84ca-07ad82b2634c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.855510 kubelet[3260]: E0517 00:23:51.855495 3260 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:51.855542 kubelet[3260]: E0517 00:23:51.855514 3260 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mm8pw" May 17 00:23:51.855542 kubelet[3260]: E0517 00:23:51.855523 3260 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mm8pw" May 17 00:23:51.855582 kubelet[3260]: E0517 00:23:51.855540 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mm8pw_kube-system(ed62d733-94ab-45e2-84ca-07ad82b2634c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mm8pw_kube-system(ed62d733-94ab-45e2-84ca-07ad82b2634c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mm8pw" podUID="ed62d733-94ab-45e2-84ca-07ad82b2634c" May 17 00:23:51.856722 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35-shm.mount: Deactivated successfully. May 17 00:23:51.856788 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e-shm.mount: Deactivated successfully. May 17 00:23:51.856842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8-shm.mount: Deactivated successfully. May 17 00:23:51.856898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651-shm.mount: Deactivated successfully. May 17 00:23:51.856950 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174-shm.mount: Deactivated successfully. 
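
Every failed ADD and DELETE above bottoms out on the same stat: the Calico CNI binary resolves the node name from /var/lib/calico/nodename, a file the calico/node container writes after it starts with /var/lib/calico mounted, exactly as the error text suggests. Until that pod is running, sandbox setup and teardown both fail, which drives the StopPodSandbox retry cycle below. A readiness probe equivalent to the failing stat, as a sketch:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
	"time"
)

// nodenameFile is the path the Calico CNI plugin stats on every ADD/DEL.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	for {
		data, err := os.ReadFile(nodenameFile)
		if err == nil {
			fmt.Printf("calico/node is up, nodename=%q\n", strings.TrimSpace(string(data)))
			return
		}
		if !os.IsNotExist(err) {
			log.Fatal(err)
		}
		// Same condition the log shows: no such file or directory.
		log.Printf("%s not present yet; CNI calls will keep failing", nodenameFile)
		time.Sleep(2 * time.Second)
	}
}
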
May 17 00:23:52.487107 kubelet[3260]: I0517 00:23:52.487006 3260 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:23:52.488083 containerd[1952]: time="2025-05-17T00:23:52.487098364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:23:52.488247 containerd[1952]: time="2025-05-17T00:23:52.488232285Z" level=info msg="StopPodSandbox for \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\"" May 17 00:23:52.488376 containerd[1952]: time="2025-05-17T00:23:52.488363629Z" level=info msg="Ensure that sandbox 55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e in task-service has been cleanup successfully" May 17 00:23:52.488417 kubelet[3260]: I0517 00:23:52.488409 3260 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:23:52.488635 containerd[1952]: time="2025-05-17T00:23:52.488622759Z" level=info msg="StopPodSandbox for \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\"" May 17 00:23:52.488734 containerd[1952]: time="2025-05-17T00:23:52.488720465Z" level=info msg="Ensure that sandbox 215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8 in task-service has been cleanup successfully" May 17 00:23:52.488843 kubelet[3260]: I0517 00:23:52.488834 3260 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:23:52.489122 containerd[1952]: time="2025-05-17T00:23:52.489108180Z" level=info msg="StopPodSandbox for \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\"" May 17 00:23:52.489224 containerd[1952]: time="2025-05-17T00:23:52.489210759Z" level=info msg="Ensure that sandbox 35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85 in task-service has been cleanup successfully" May 17 00:23:52.489305 kubelet[3260]: I0517 00:23:52.489294 3260 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:23:52.489577 containerd[1952]: time="2025-05-17T00:23:52.489561647Z" level=info msg="StopPodSandbox for \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\"" May 17 00:23:52.489680 containerd[1952]: time="2025-05-17T00:23:52.489667604Z" level=info msg="Ensure that sandbox ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651 in task-service has been cleanup successfully" May 17 00:23:52.489923 kubelet[3260]: I0517 00:23:52.489907 3260 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:23:52.490355 containerd[1952]: time="2025-05-17T00:23:52.490333586Z" level=info msg="StopPodSandbox for \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\"" May 17 00:23:52.490481 kubelet[3260]: I0517 00:23:52.490466 3260 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:23:52.490539 containerd[1952]: time="2025-05-17T00:23:52.490468255Z" level=info msg="Ensure that sandbox fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174 in task-service has been cleanup successfully" May 17 00:23:52.490783 containerd[1952]: 
time="2025-05-17T00:23:52.490765576Z" level=info msg="StopPodSandbox for \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\"" May 17 00:23:52.490926 containerd[1952]: time="2025-05-17T00:23:52.490906452Z" level=info msg="Ensure that sandbox 095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae in task-service has been cleanup successfully" May 17 00:23:52.491047 kubelet[3260]: I0517 00:23:52.491034 3260 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" May 17 00:23:52.491565 containerd[1952]: time="2025-05-17T00:23:52.491535994Z" level=info msg="StopPodSandbox for \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\"" May 17 00:23:52.491698 containerd[1952]: time="2025-05-17T00:23:52.491688331Z" level=info msg="Ensure that sandbox 6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35 in task-service has been cleanup successfully" May 17 00:23:52.504086 containerd[1952]: time="2025-05-17T00:23:52.503952193Z" level=error msg="StopPodSandbox for \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\" failed" error="failed to destroy network for sandbox \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:52.504211 containerd[1952]: time="2025-05-17T00:23:52.504110144Z" level=error msg="StopPodSandbox for \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\" failed" error="failed to destroy network for sandbox \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:52.504211 containerd[1952]: time="2025-05-17T00:23:52.504169438Z" level=error msg="StopPodSandbox for \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\" failed" error="failed to destroy network for sandbox \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:52.504293 kubelet[3260]: E0517 00:23:52.504158 3260 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:23:52.504293 kubelet[3260]: E0517 00:23:52.504210 3260 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:23:52.504293 kubelet[3260]: E0517 
00:23:52.504207 3260 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e"} May 17 00:23:52.504293 kubelet[3260]: E0517 00:23:52.504271 3260 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:23:52.504398 kubelet[3260]: E0517 00:23:52.504276 3260 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ed62d733-94ab-45e2-84ca-07ad82b2634c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:52.504398 kubelet[3260]: E0517 00:23:52.504285 3260 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85"} May 17 00:23:52.504398 kubelet[3260]: E0517 00:23:52.504298 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ed62d733-94ab-45e2-84ca-07ad82b2634c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mm8pw" podUID="ed62d733-94ab-45e2-84ca-07ad82b2634c" May 17 00:23:52.504398 kubelet[3260]: E0517 00:23:52.504309 3260 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"307fc931-df0d-46ad-a0f5-9c642c960ef0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:52.504398 kubelet[3260]: E0517 00:23:52.504236 3260 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651"} May 17 00:23:52.504554 kubelet[3260]: E0517 00:23:52.504327 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"307fc931-df0d-46ad-a0f5-9c642c960ef0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" 
podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:23:52.504554 kubelet[3260]: E0517 00:23:52.504334 3260 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6d8a08e2-6fb2-41c5-aa91-98552c67cdeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:52.504554 kubelet[3260]: E0517 00:23:52.504372 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6d8a08e2-6fb2-41c5-aa91-98552c67cdeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b4545c59-2hqxd" podUID="6d8a08e2-6fb2-41c5-aa91-98552c67cdeb" May 17 00:23:52.505861 containerd[1952]: time="2025-05-17T00:23:52.505840879Z" level=error msg="StopPodSandbox for \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\" failed" error="failed to destroy network for sandbox \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:52.505947 kubelet[3260]: E0517 00:23:52.505931 3260 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:23:52.505979 kubelet[3260]: E0517 00:23:52.505952 3260 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8"} May 17 00:23:52.505979 kubelet[3260]: E0517 00:23:52.505973 3260 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ea4c5df-b413-4cdc-89c1-362994a46bb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:52.506033 kubelet[3260]: E0517 00:23:52.505987 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ea4c5df-b413-4cdc-89c1-362994a46bb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54b6765ddc-b72d9" podUID="8ea4c5df-b413-4cdc-89c1-362994a46bb2" May 17 00:23:52.507116 containerd[1952]: time="2025-05-17T00:23:52.507093648Z" level=error msg="StopPodSandbox for \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\" failed" error="failed to destroy network for sandbox \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:52.507185 containerd[1952]: time="2025-05-17T00:23:52.507167726Z" level=error msg="StopPodSandbox for \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\" failed" error="failed to destroy network for sandbox \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:52.507209 kubelet[3260]: E0517 00:23:52.507197 3260 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:23:52.507243 kubelet[3260]: E0517 00:23:52.507215 3260 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174"} May 17 00:23:52.507243 kubelet[3260]: E0517 00:23:52.507230 3260 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5181617e-229c-477a-82dd-f49d74685250\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:52.507292 kubelet[3260]: E0517 00:23:52.507237 3260 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" May 17 00:23:52.507292 kubelet[3260]: E0517 00:23:52.507254 3260 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35"} May 17 00:23:52.507292 kubelet[3260]: E0517 00:23:52.507268 3260 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c71ecf2-a673-4248-97b2-14f7ed1d6b2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:52.507292 kubelet[3260]: E0517 00:23:52.507277 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c71ecf2-a673-4248-97b2-14f7ed1d6b2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cb9df59f4-w67vm" podUID="7c71ecf2-a673-4248-97b2-14f7ed1d6b2c" May 17 00:23:52.507398 kubelet[3260]: E0517 00:23:52.507243 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5181617e-229c-477a-82dd-f49d74685250\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kmxkn" podUID="5181617e-229c-477a-82dd-f49d74685250" May 17 00:23:52.507398 kubelet[3260]: E0517 00:23:52.507341 3260 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:23:52.507398 kubelet[3260]: E0517 00:23:52.507351 3260 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae"} May 17 00:23:52.507398 kubelet[3260]: E0517 00:23:52.507362 3260 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:52.507487 containerd[1952]: time="2025-05-17T00:23:52.507278896Z" level=error msg="StopPodSandbox for \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\" failed" error="failed to destroy network for sandbox \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:52.507511 kubelet[3260]: E0517 00:23:52.507371 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b4545c59-j694m" podUID="2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6" May 17 00:23:53.409463 containerd[1952]: time="2025-05-17T00:23:53.409418509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkm4q,Uid:403f708a-0226-4cfa-98fa-55326c364f55,Namespace:calico-system,Attempt:0,}" May 17 00:23:53.436402 containerd[1952]: time="2025-05-17T00:23:53.436350362Z" level=error msg="Failed to destroy network for sandbox \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:53.436625 containerd[1952]: time="2025-05-17T00:23:53.436579065Z" level=error msg="encountered an error cleaning up failed sandbox \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:53.436664 containerd[1952]: time="2025-05-17T00:23:53.436617756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkm4q,Uid:403f708a-0226-4cfa-98fa-55326c364f55,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:53.436808 kubelet[3260]: E0517 00:23:53.436755 3260 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:53.436808 kubelet[3260]: E0517 00:23:53.436796 3260 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkm4q" May 17 00:23:53.436874 kubelet[3260]: E0517 00:23:53.436809 3260 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkm4q" May 17 
00:23:53.436874 kubelet[3260]: E0517 00:23:53.436836 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkm4q_calico-system(403f708a-0226-4cfa-98fa-55326c364f55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkm4q_calico-system(403f708a-0226-4cfa-98fa-55326c364f55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkm4q" podUID="403f708a-0226-4cfa-98fa-55326c364f55" May 17 00:23:53.438127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed-shm.mount: Deactivated successfully. May 17 00:23:53.493053 kubelet[3260]: I0517 00:23:53.493036 3260 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:23:53.493365 containerd[1952]: time="2025-05-17T00:23:53.493350536Z" level=info msg="StopPodSandbox for \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\"" May 17 00:23:53.493463 containerd[1952]: time="2025-05-17T00:23:53.493453573Z" level=info msg="Ensure that sandbox 0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed in task-service has been cleanup successfully" May 17 00:23:53.507172 containerd[1952]: time="2025-05-17T00:23:53.507129747Z" level=error msg="StopPodSandbox for \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\" failed" error="failed to destroy network for sandbox \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:23:53.507352 kubelet[3260]: E0517 00:23:53.507321 3260 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:23:53.507416 kubelet[3260]: E0517 00:23:53.507365 3260 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed"} May 17 00:23:53.507416 kubelet[3260]: E0517 00:23:53.507407 3260 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"403f708a-0226-4cfa-98fa-55326c364f55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:23:53.507522 kubelet[3260]: E0517 00:23:53.507433 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"403f708a-0226-4cfa-98fa-55326c364f55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkm4q" podUID="403f708a-0226-4cfa-98fa-55326c364f55" May 17 00:23:55.793525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3954493108.mount: Deactivated successfully. May 17 00:23:55.810561 containerd[1952]: time="2025-05-17T00:23:55.810540574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:55.810774 containerd[1952]: time="2025-05-17T00:23:55.810736838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:23:55.811123 containerd[1952]: time="2025-05-17T00:23:55.811109767Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:55.812035 containerd[1952]: time="2025-05-17T00:23:55.812018456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:23:55.812655 containerd[1952]: time="2025-05-17T00:23:55.812643502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 3.325463494s" May 17 00:23:55.812680 containerd[1952]: time="2025-05-17T00:23:55.812658036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:23:55.815970 containerd[1952]: time="2025-05-17T00:23:55.815950878Z" level=info msg="CreateContainer within sandbox \"ee7928812c6864f287dacffe28954b4a716bb9a359b567e72ec86c2d3cc9e872\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:23:55.820456 containerd[1952]: time="2025-05-17T00:23:55.820439438Z" level=info msg="CreateContainer within sandbox \"ee7928812c6864f287dacffe28954b4a716bb9a359b567e72ec86c2d3cc9e872\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"03864acbc6e25e71a674e4323ae1579428046ca4fc59383d2b6078cf3ed05f6b\"" May 17 00:23:55.820698 containerd[1952]: time="2025-05-17T00:23:55.820682447Z" level=info msg="StartContainer for \"03864acbc6e25e71a674e4323ae1579428046ca4fc59383d2b6078cf3ed05f6b\"" May 17 00:23:55.854268 containerd[1952]: time="2025-05-17T00:23:55.854214526Z" level=info msg="StartContainer for \"03864acbc6e25e71a674e4323ae1579428046ca4fc59383d2b6078cf3ed05f6b\" returns successfully" May 17 00:23:55.925047 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:23:55.925101 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 17 00:23:55.958453 containerd[1952]: time="2025-05-17T00:23:55.958427294Z" level=info msg="StopPodSandbox for \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\"" May 17 00:23:55.996170 containerd[1952]: 2025-05-17 00:23:55.979 [INFO][4821] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:23:55.996170 containerd[1952]: 2025-05-17 00:23:55.979 [INFO][4821] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" iface="eth0" netns="/var/run/netns/cni-c209d4ac-36c9-8513-8b55-70836a6da5d6" May 17 00:23:55.996170 containerd[1952]: 2025-05-17 00:23:55.979 [INFO][4821] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" iface="eth0" netns="/var/run/netns/cni-c209d4ac-36c9-8513-8b55-70836a6da5d6" May 17 00:23:55.996170 containerd[1952]: 2025-05-17 00:23:55.979 [INFO][4821] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" iface="eth0" netns="/var/run/netns/cni-c209d4ac-36c9-8513-8b55-70836a6da5d6" May 17 00:23:55.996170 containerd[1952]: 2025-05-17 00:23:55.980 [INFO][4821] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:23:55.996170 containerd[1952]: 2025-05-17 00:23:55.980 [INFO][4821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:23:55.996170 containerd[1952]: 2025-05-17 00:23:55.989 [INFO][4849] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" HandleID="k8s-pod-network.215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" Workload="ci--4081.3.3--n--750554c5a6-k8s-whisker--54b6765ddc--b72d9-eth0" May 17 00:23:55.996170 containerd[1952]: 2025-05-17 00:23:55.989 [INFO][4849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:55.996170 containerd[1952]: 2025-05-17 00:23:55.990 [INFO][4849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:23:55.996170 containerd[1952]: 2025-05-17 00:23:55.993 [WARNING][4849] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" HandleID="k8s-pod-network.215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" Workload="ci--4081.3.3--n--750554c5a6-k8s-whisker--54b6765ddc--b72d9-eth0" May 17 00:23:55.996170 containerd[1952]: 2025-05-17 00:23:55.993 [INFO][4849] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" HandleID="k8s-pod-network.215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" Workload="ci--4081.3.3--n--750554c5a6-k8s-whisker--54b6765ddc--b72d9-eth0" May 17 00:23:55.996170 containerd[1952]: 2025-05-17 00:23:55.993 [INFO][4849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:23:55.996170 containerd[1952]: 2025-05-17 00:23:55.995 [INFO][4821] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:23:55.996460 containerd[1952]: time="2025-05-17T00:23:55.996224034Z" level=info msg="TearDown network for sandbox \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\" successfully" May 17 00:23:55.996460 containerd[1952]: time="2025-05-17T00:23:55.996243043Z" level=info msg="StopPodSandbox for \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\" returns successfully" May 17 00:23:56.198827 kubelet[3260]: I0517 00:23:56.198571 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf586\" (UniqueName: \"kubernetes.io/projected/8ea4c5df-b413-4cdc-89c1-362994a46bb2-kube-api-access-pf586\") pod \"8ea4c5df-b413-4cdc-89c1-362994a46bb2\" (UID: \"8ea4c5df-b413-4cdc-89c1-362994a46bb2\") " May 17 00:23:56.198827 kubelet[3260]: I0517 00:23:56.198698 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ea4c5df-b413-4cdc-89c1-362994a46bb2-whisker-backend-key-pair\") pod \"8ea4c5df-b413-4cdc-89c1-362994a46bb2\" (UID: \"8ea4c5df-b413-4cdc-89c1-362994a46bb2\") " May 17 00:23:56.198827 kubelet[3260]: I0517 00:23:56.198756 3260 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ea4c5df-b413-4cdc-89c1-362994a46bb2-whisker-ca-bundle\") pod \"8ea4c5df-b413-4cdc-89c1-362994a46bb2\" (UID: \"8ea4c5df-b413-4cdc-89c1-362994a46bb2\") " May 17 00:23:56.200310 kubelet[3260]: I0517 00:23:56.199691 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ea4c5df-b413-4cdc-89c1-362994a46bb2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8ea4c5df-b413-4cdc-89c1-362994a46bb2" (UID: "8ea4c5df-b413-4cdc-89c1-362994a46bb2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:23:56.204691 kubelet[3260]: I0517 00:23:56.204596 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea4c5df-b413-4cdc-89c1-362994a46bb2-kube-api-access-pf586" (OuterVolumeSpecName: "kube-api-access-pf586") pod "8ea4c5df-b413-4cdc-89c1-362994a46bb2" (UID: "8ea4c5df-b413-4cdc-89c1-362994a46bb2"). InnerVolumeSpecName "kube-api-access-pf586". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:23:56.204963 kubelet[3260]: I0517 00:23:56.204895 3260 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea4c5df-b413-4cdc-89c1-362994a46bb2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8ea4c5df-b413-4cdc-89c1-362994a46bb2" (UID: "8ea4c5df-b413-4cdc-89c1-362994a46bb2"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:23:56.300004 kubelet[3260]: I0517 00:23:56.299909 3260 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8ea4c5df-b413-4cdc-89c1-362994a46bb2-whisker-backend-key-pair\") on node \"ci-4081.3.3-n-750554c5a6\" DevicePath \"\"" May 17 00:23:56.300004 kubelet[3260]: I0517 00:23:56.300007 3260 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ea4c5df-b413-4cdc-89c1-362994a46bb2-whisker-ca-bundle\") on node \"ci-4081.3.3-n-750554c5a6\" DevicePath \"\"" May 17 00:23:56.300364 kubelet[3260]: I0517 00:23:56.300067 3260 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf586\" (UniqueName: \"kubernetes.io/projected/8ea4c5df-b413-4cdc-89c1-362994a46bb2-kube-api-access-pf586\") on node \"ci-4081.3.3-n-750554c5a6\" DevicePath \"\"" May 17 00:23:56.519591 kubelet[3260]: I0517 00:23:56.519546 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5gqjp" podStartSLOduration=1.917592215 podStartE2EDuration="14.519529606s" podCreationTimestamp="2025-05-17 00:23:42 +0000 UTC" firstStartedPulling="2025-05-17 00:23:43.211013811 +0000 UTC m=+15.879986345" lastFinishedPulling="2025-05-17 00:23:55.812951202 +0000 UTC m=+28.481923736" observedRunningTime="2025-05-17 00:23:56.51946251 +0000 UTC m=+29.188435044" watchObservedRunningTime="2025-05-17 00:23:56.519529606 +0000 UTC m=+29.188502137" May 17 00:23:56.702987 kubelet[3260]: I0517 00:23:56.702901 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sts8\" (UniqueName: \"kubernetes.io/projected/d4055bd2-75b3-4d87-b331-17976496ef74-kube-api-access-5sts8\") pod \"whisker-dfcb87d55-nzxqv\" (UID: \"d4055bd2-75b3-4d87-b331-17976496ef74\") " pod="calico-system/whisker-dfcb87d55-nzxqv" May 17 00:23:56.703235 kubelet[3260]: I0517 00:23:56.703025 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4055bd2-75b3-4d87-b331-17976496ef74-whisker-ca-bundle\") pod \"whisker-dfcb87d55-nzxqv\" (UID: \"d4055bd2-75b3-4d87-b331-17976496ef74\") " pod="calico-system/whisker-dfcb87d55-nzxqv" May 17 00:23:56.703235 kubelet[3260]: I0517 00:23:56.703166 3260 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d4055bd2-75b3-4d87-b331-17976496ef74-whisker-backend-key-pair\") pod \"whisker-dfcb87d55-nzxqv\" (UID: \"d4055bd2-75b3-4d87-b331-17976496ef74\") " pod="calico-system/whisker-dfcb87d55-nzxqv" May 17 00:23:56.798558 systemd[1]: run-netns-cni\x2dc209d4ac\x2d36c9\x2d8513\x2d8b55\x2d70836a6da5d6.mount: Deactivated successfully. May 17 00:23:56.798654 systemd[1]: var-lib-kubelet-pods-8ea4c5df\x2db413\x2d4cdc\x2d89c1\x2d362994a46bb2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpf586.mount: Deactivated successfully. May 17 00:23:56.798717 systemd[1]: var-lib-kubelet-pods-8ea4c5df\x2db413\x2d4cdc\x2d89c1\x2d362994a46bb2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
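The run-netns-cni-… and var-lib-kubelet-pods-… names in the systemd lines above are mount units derived from filesystem paths: '/' separators become '-', and bytes outside [A-Za-z0-9:_.] are hex-escaped, so '-' shows up as \x2d and '~' (as in kubernetes.io~secret) as \x7e. A minimal sketch of that escaping under simplified assumptions; it omits edge cases, such as a leading '.', that the full systemd-escape algorithm handles:

package main

import (
	"fmt"
	"strings"
)

// escapeByte keeps [A-Za-z0-9:_.] as-is and hex-escapes everything else,
// matching the \x2d / \x7e sequences visible in the unit names above.
func escapeByte(b byte) string {
	if b >= 'a' && b <= 'z' || b >= 'A' && b <= 'Z' || b >= '0' && b <= '9' ||
		b == ':' || b == '_' || b == '.' {
		return string(b)
	}
	return fmt.Sprintf(`\x%02x`, b)
}

// escapePath converts an absolute path into a systemd unit-name prefix:
// '/' separators become '-', segment bytes are escaped individually.
func escapePath(path string) string {
	var out strings.Builder
	for _, seg := range strings.Split(strings.Trim(path, "/"), "/") {
		if out.Len() > 0 {
			out.WriteByte('-')
		}
		for i := 0; i < len(seg); i++ {
			out.WriteString(escapeByte(seg[i]))
		}
	}
	return out.String()
}

func main() {
	p := "/var/lib/kubelet/pods/8ea4c5df-b413-4cdc-89c1-362994a46bb2/volumes"
	fmt.Println(escapePath(p) + ".mount")
	// prints: var-lib-kubelet-pods-8ea4c5df\x2db413\x2d4cdc\x2d89c1\x2d362994a46bb2-volumes.mount
}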
May 17 00:23:56.842858 containerd[1952]: time="2025-05-17T00:23:56.842820863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dfcb87d55-nzxqv,Uid:d4055bd2-75b3-4d87-b331-17976496ef74,Namespace:calico-system,Attempt:0,}" May 17 00:23:56.910563 systemd-networkd[1566]: calic9dbf773377: Link UP May 17 00:23:56.910846 systemd-networkd[1566]: calic9dbf773377: Gained carrier May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.857 [INFO][4914] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.865 [INFO][4914] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-eth0 whisker-dfcb87d55- calico-system d4055bd2-75b3-4d87-b331-17976496ef74 887 0 2025-05-17 00:23:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:dfcb87d55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.3-n-750554c5a6 whisker-dfcb87d55-nzxqv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic9dbf773377 [] [] }} ContainerID="343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" Namespace="calico-system" Pod="whisker-dfcb87d55-nzxqv" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-" May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.865 [INFO][4914] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" Namespace="calico-system" Pod="whisker-dfcb87d55-nzxqv" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-eth0" May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.880 [INFO][4938] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" HandleID="k8s-pod-network.343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" Workload="ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-eth0" May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.880 [INFO][4938] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" HandleID="k8s-pod-network.343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" Workload="ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-750554c5a6", "pod":"whisker-dfcb87d55-nzxqv", "timestamp":"2025-05-17 00:23:56.880337587 +0000 UTC"}, Hostname:"ci-4081.3.3-n-750554c5a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.880 [INFO][4938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.880 [INFO][4938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.880 [INFO][4938] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-750554c5a6' May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.886 [INFO][4938] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" host="ci-4081.3.3-n-750554c5a6" May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.889 [INFO][4938] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-750554c5a6" May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.892 [INFO][4938] ipam/ipam.go 511: Trying affinity for 192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.893 [INFO][4938] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.895 [INFO][4938] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.895 [INFO][4938] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" host="ci-4081.3.3-n-750554c5a6" May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.896 [INFO][4938] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4 May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.899 [INFO][4938] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" host="ci-4081.3.3-n-750554c5a6" May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.902 [INFO][4938] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.129/26] block=192.168.62.128/26 handle="k8s-pod-network.343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" host="ci-4081.3.3-n-750554c5a6" May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.902 [INFO][4938] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.129/26] handle="k8s-pod-network.343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" host="ci-4081.3.3-n-750554c5a6" May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.902 [INFO][4938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:23:56.920109 containerd[1952]: 2025-05-17 00:23:56.903 [INFO][4938] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.129/26] IPv6=[] ContainerID="343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" HandleID="k8s-pod-network.343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" Workload="ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-eth0" May 17 00:23:56.920989 containerd[1952]: 2025-05-17 00:23:56.904 [INFO][4914] cni-plugin/k8s.go 418: Populated endpoint ContainerID="343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" Namespace="calico-system" Pod="whisker-dfcb87d55-nzxqv" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-eth0", GenerateName:"whisker-dfcb87d55-", Namespace:"calico-system", SelfLink:"", UID:"d4055bd2-75b3-4d87-b331-17976496ef74", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dfcb87d55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"", Pod:"whisker-dfcb87d55-nzxqv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic9dbf773377", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:56.920989 containerd[1952]: 2025-05-17 00:23:56.904 [INFO][4914] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.129/32] ContainerID="343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" Namespace="calico-system" Pod="whisker-dfcb87d55-nzxqv" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-eth0" May 17 00:23:56.920989 containerd[1952]: 2025-05-17 00:23:56.904 [INFO][4914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9dbf773377 ContainerID="343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" Namespace="calico-system" Pod="whisker-dfcb87d55-nzxqv" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-eth0" May 17 00:23:56.920989 containerd[1952]: 2025-05-17 00:23:56.910 [INFO][4914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" Namespace="calico-system" Pod="whisker-dfcb87d55-nzxqv" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-eth0" May 17 00:23:56.920989 containerd[1952]: 2025-05-17 00:23:56.911 [INFO][4914] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" Namespace="calico-system" 
Pod="whisker-dfcb87d55-nzxqv" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-eth0", GenerateName:"whisker-dfcb87d55-", Namespace:"calico-system", SelfLink:"", UID:"d4055bd2-75b3-4d87-b331-17976496ef74", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dfcb87d55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4", Pod:"whisker-dfcb87d55-nzxqv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic9dbf773377", MAC:"e6:f7:67:2d:be:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:23:56.920989 containerd[1952]: 2025-05-17 00:23:56.918 [INFO][4914] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4" Namespace="calico-system" Pod="whisker-dfcb87d55-nzxqv" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-whisker--dfcb87d55--nzxqv-eth0" May 17 00:23:56.930371 containerd[1952]: time="2025-05-17T00:23:56.930334923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:56.930371 containerd[1952]: time="2025-05-17T00:23:56.930359075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:56.930371 containerd[1952]: time="2025-05-17T00:23:56.930365738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:56.930524 containerd[1952]: time="2025-05-17T00:23:56.930419505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:56.978285 containerd[1952]: time="2025-05-17T00:23:56.978255604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dfcb87d55-nzxqv,Uid:d4055bd2-75b3-4d87-b331-17976496ef74,Namespace:calico-system,Attempt:0,} returns sandbox id \"343a33756b5ee9db48bb094e92a698bd28ad1f5f190ee58025ddaa12598df6b4\"" May 17 00:23:56.979125 containerd[1952]: time="2025-05-17T00:23:56.979107765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:23:57.305208 containerd[1952]: time="2025-05-17T00:23:57.305045994Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:57.306161 containerd[1952]: time="2025-05-17T00:23:57.306074774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:57.306207 containerd[1952]: time="2025-05-17T00:23:57.306166453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:23:57.306341 kubelet[3260]: E0517 00:23:57.306286 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:23:57.306655 kubelet[3260]: E0517 00:23:57.306359 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:23:57.306684 kubelet[3260]: E0517 00:23:57.306449 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:89e605ba6a4a488d920453165d66bb98,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5sts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfcb87d55-nzxqv_calico-system(d4055bd2-75b3-4d87-b331-17976496ef74): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:57.309089 containerd[1952]: time="2025-05-17T00:23:57.309079187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:23:57.407046 kubelet[3260]: I0517 00:23:57.406995 3260 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea4c5df-b413-4cdc-89c1-362994a46bb2" path="/var/lib/kubelet/pods/8ea4c5df-b413-4cdc-89c1-362994a46bb2/volumes" May 17 00:23:57.627890 containerd[1952]: time="2025-05-17T00:23:57.627639933Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:57.628498 containerd[1952]: time="2025-05-17T00:23:57.628439224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:57.628565 containerd[1952]: time="2025-05-17T00:23:57.628519042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:23:57.628704 kubelet[3260]: E0517 00:23:57.628653 3260 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:23:57.628704 kubelet[3260]: E0517 00:23:57.628682 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:23:57.628806 kubelet[3260]: E0517 00:23:57.628751 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5sts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfcb87d55-nzxqv_calico-system(d4055bd2-75b3-4d87-b331-17976496ef74): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:57.630483 kubelet[3260]: E0517 00:23:57.630463 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:23:58.169832 systemd-networkd[1566]: calic9dbf773377: Gained IPv6LL May 17 00:23:58.516109 kubelet[3260]: E0517 00:23:58.515990 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:24:04.404873 containerd[1952]: time="2025-05-17T00:24:04.404835567Z" level=info msg="StopPodSandbox for \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\"" May 17 00:24:04.405198 containerd[1952]: time="2025-05-17T00:24:04.404835861Z" level=info msg="StopPodSandbox for \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\"" May 17 00:24:04.405198 containerd[1952]: time="2025-05-17T00:24:04.404835618Z" level=info msg="StopPodSandbox for \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\"" May 17 00:24:04.448364 containerd[1952]: 2025-05-17 00:24:04.432 [INFO][5562] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:24:04.448364 containerd[1952]: 2025-05-17 00:24:04.432 [INFO][5562] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" iface="eth0" netns="/var/run/netns/cni-3b12f41c-1b22-78c3-28ea-48e4b4a0f7fa" May 17 00:24:04.448364 containerd[1952]: 2025-05-17 00:24:04.432 [INFO][5562] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" iface="eth0" netns="/var/run/netns/cni-3b12f41c-1b22-78c3-28ea-48e4b4a0f7fa" May 17 00:24:04.448364 containerd[1952]: 2025-05-17 00:24:04.432 [INFO][5562] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" iface="eth0" netns="/var/run/netns/cni-3b12f41c-1b22-78c3-28ea-48e4b4a0f7fa" May 17 00:24:04.448364 containerd[1952]: 2025-05-17 00:24:04.433 [INFO][5562] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:24:04.448364 containerd[1952]: 2025-05-17 00:24:04.433 [INFO][5562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:24:04.448364 containerd[1952]: 2025-05-17 00:24:04.442 [INFO][5620] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" HandleID="k8s-pod-network.0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" Workload="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:04.448364 containerd[1952]: 2025-05-17 00:24:04.442 [INFO][5620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:04.448364 containerd[1952]: 2025-05-17 00:24:04.442 [INFO][5620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:04.448364 containerd[1952]: 2025-05-17 00:24:04.446 [WARNING][5620] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" HandleID="k8s-pod-network.0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" Workload="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:04.448364 containerd[1952]: 2025-05-17 00:24:04.446 [INFO][5620] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" HandleID="k8s-pod-network.0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" Workload="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:04.448364 containerd[1952]: 2025-05-17 00:24:04.447 [INFO][5620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:04.448364 containerd[1952]: 2025-05-17 00:24:04.447 [INFO][5562] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:24:04.450065 systemd[1]: run-netns-cni\x2d3b12f41c\x2d1b22\x2d78c3\x2d28ea\x2d48e4b4a0f7fa.mount: Deactivated successfully. May 17 00:24:04.450695 containerd[1952]: time="2025-05-17T00:24:04.450676487Z" level=info msg="TearDown network for sandbox \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\" successfully" May 17 00:24:04.450725 containerd[1952]: time="2025-05-17T00:24:04.450696784Z" level=info msg="StopPodSandbox for \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\" returns successfully" May 17 00:24:04.451092 containerd[1952]: time="2025-05-17T00:24:04.451080207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkm4q,Uid:403f708a-0226-4cfa-98fa-55326c364f55,Namespace:calico-system,Attempt:1,}" May 17 00:24:04.452438 containerd[1952]: 2025-05-17 00:24:04.431 [INFO][5563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:24:04.452438 containerd[1952]: 2025-05-17 00:24:04.431 [INFO][5563] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" iface="eth0" netns="/var/run/netns/cni-08a80b24-6973-84c3-a032-59ab2f00c6fd" May 17 00:24:04.452438 containerd[1952]: 2025-05-17 00:24:04.431 [INFO][5563] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" iface="eth0" netns="/var/run/netns/cni-08a80b24-6973-84c3-a032-59ab2f00c6fd" May 17 00:24:04.452438 containerd[1952]: 2025-05-17 00:24:04.431 [INFO][5563] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" iface="eth0" netns="/var/run/netns/cni-08a80b24-6973-84c3-a032-59ab2f00c6fd" May 17 00:24:04.452438 containerd[1952]: 2025-05-17 00:24:04.431 [INFO][5563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:24:04.452438 containerd[1952]: 2025-05-17 00:24:04.431 [INFO][5563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:24:04.452438 containerd[1952]: 2025-05-17 00:24:04.443 [INFO][5613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" HandleID="k8s-pod-network.fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:04.452438 containerd[1952]: 2025-05-17 00:24:04.443 [INFO][5613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:04.452438 containerd[1952]: 2025-05-17 00:24:04.447 [INFO][5613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:04.452438 containerd[1952]: 2025-05-17 00:24:04.449 [WARNING][5613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" HandleID="k8s-pod-network.fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:04.452438 containerd[1952]: 2025-05-17 00:24:04.449 [INFO][5613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" HandleID="k8s-pod-network.fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:04.452438 containerd[1952]: 2025-05-17 00:24:04.451 [INFO][5613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:04.452438 containerd[1952]: 2025-05-17 00:24:04.451 [INFO][5563] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:24:04.452831 containerd[1952]: time="2025-05-17T00:24:04.452515454Z" level=info msg="TearDown network for sandbox \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\" successfully" May 17 00:24:04.452831 containerd[1952]: time="2025-05-17T00:24:04.452529390Z" level=info msg="StopPodSandbox for \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\" returns successfully" May 17 00:24:04.452831 containerd[1952]: time="2025-05-17T00:24:04.452788688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kmxkn,Uid:5181617e-229c-477a-82dd-f49d74685250,Namespace:kube-system,Attempt:1,}" May 17 00:24:04.454323 systemd[1]: run-netns-cni\x2d08a80b24\x2d6973\x2d84c3\x2da032\x2d59ab2f00c6fd.mount: Deactivated successfully. May 17 00:24:04.457659 containerd[1952]: 2025-05-17 00:24:04.432 [INFO][5564] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:24:04.457659 containerd[1952]: 2025-05-17 00:24:04.432 [INFO][5564] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" iface="eth0" netns="/var/run/netns/cni-2d3fc93c-720c-0f3e-359a-6f622f8c4891" May 17 00:24:04.457659 containerd[1952]: 2025-05-17 00:24:04.432 [INFO][5564] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" iface="eth0" netns="/var/run/netns/cni-2d3fc93c-720c-0f3e-359a-6f622f8c4891" May 17 00:24:04.457659 containerd[1952]: 2025-05-17 00:24:04.432 [INFO][5564] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" iface="eth0" netns="/var/run/netns/cni-2d3fc93c-720c-0f3e-359a-6f622f8c4891" May 17 00:24:04.457659 containerd[1952]: 2025-05-17 00:24:04.432 [INFO][5564] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:24:04.457659 containerd[1952]: 2025-05-17 00:24:04.432 [INFO][5564] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:24:04.457659 containerd[1952]: 2025-05-17 00:24:04.444 [INFO][5615] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" HandleID="k8s-pod-network.095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:04.457659 containerd[1952]: 2025-05-17 00:24:04.444 [INFO][5615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:04.457659 containerd[1952]: 2025-05-17 00:24:04.451 [INFO][5615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:04.457659 containerd[1952]: 2025-05-17 00:24:04.454 [WARNING][5615] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" HandleID="k8s-pod-network.095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:04.457659 containerd[1952]: 2025-05-17 00:24:04.454 [INFO][5615] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" HandleID="k8s-pod-network.095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:04.457659 containerd[1952]: 2025-05-17 00:24:04.455 [INFO][5615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:04.457659 containerd[1952]: 2025-05-17 00:24:04.456 [INFO][5564] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:24:04.457994 containerd[1952]: time="2025-05-17T00:24:04.457739772Z" level=info msg="TearDown network for sandbox \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\" successfully" May 17 00:24:04.457994 containerd[1952]: time="2025-05-17T00:24:04.457759874Z" level=info msg="StopPodSandbox for \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\" returns successfully" May 17 00:24:04.458157 containerd[1952]: time="2025-05-17T00:24:04.458145260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b4545c59-j694m,Uid:2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6,Namespace:calico-apiserver,Attempt:1,}" May 17 00:24:04.460224 systemd[1]: run-netns-cni\x2d2d3fc93c\x2d720c\x2d0f3e\x2d359a\x2d6f622f8c4891.mount: Deactivated successfully. 
May 17 00:24:04.520487 systemd-networkd[1566]: cali4a95f1c1ffc: Link UP May 17 00:24:04.520637 systemd-networkd[1566]: cali4a95f1c1ffc: Gained carrier May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.466 [INFO][5664] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.473 [INFO][5664] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0 csi-node-driver- calico-system 403f708a-0226-4cfa-98fa-55326c364f55 927 0 2025-05-17 00:23:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-n-750554c5a6 csi-node-driver-hkm4q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4a95f1c1ffc [] [] }} ContainerID="0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" Namespace="calico-system" Pod="csi-node-driver-hkm4q" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-" May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.473 [INFO][5664] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" Namespace="calico-system" Pod="csi-node-driver-hkm4q" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.486 [INFO][5738] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" HandleID="k8s-pod-network.0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" Workload="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.486 [INFO][5738] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" HandleID="k8s-pod-network.0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" Workload="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-750554c5a6", "pod":"csi-node-driver-hkm4q", "timestamp":"2025-05-17 00:24:04.486664882 +0000 UTC"}, Hostname:"ci-4081.3.3-n-750554c5a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.486 [INFO][5738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.486 [INFO][5738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.486 [INFO][5738] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-750554c5a6' May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.490 [INFO][5738] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.494 [INFO][5738] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.497 [INFO][5738] ipam/ipam.go 511: Trying affinity for 192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.498 [INFO][5738] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.500 [INFO][5738] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.500 [INFO][5738] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.501 [INFO][5738] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6 May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.515 [INFO][5738] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.518 [INFO][5738] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.130/26] block=192.168.62.128/26 handle="k8s-pod-network.0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.518 [INFO][5738] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.130/26] handle="k8s-pod-network.0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.518 [INFO][5738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:24:04.526779 containerd[1952]: 2025-05-17 00:24:04.518 [INFO][5738] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.130/26] IPv6=[] ContainerID="0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" HandleID="k8s-pod-network.0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" Workload="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:04.527274 containerd[1952]: 2025-05-17 00:24:04.519 [INFO][5664] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" Namespace="calico-system" Pod="csi-node-driver-hkm4q" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"403f708a-0226-4cfa-98fa-55326c364f55", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"", Pod:"csi-node-driver-hkm4q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a95f1c1ffc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:04.527274 containerd[1952]: 2025-05-17 00:24:04.519 [INFO][5664] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.130/32] ContainerID="0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" Namespace="calico-system" Pod="csi-node-driver-hkm4q" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:04.527274 containerd[1952]: 2025-05-17 00:24:04.519 [INFO][5664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a95f1c1ffc ContainerID="0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" Namespace="calico-system" Pod="csi-node-driver-hkm4q" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:04.527274 containerd[1952]: 2025-05-17 00:24:04.520 [INFO][5664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" Namespace="calico-system" Pod="csi-node-driver-hkm4q" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:04.527274 containerd[1952]: 2025-05-17 00:24:04.520 [INFO][5664] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" Namespace="calico-system" Pod="csi-node-driver-hkm4q" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"403f708a-0226-4cfa-98fa-55326c364f55", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6", Pod:"csi-node-driver-hkm4q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a95f1c1ffc", MAC:"9a:14:13:82:3f:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:04.527274 containerd[1952]: 2025-05-17 00:24:04.525 [INFO][5664] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6" Namespace="calico-system" Pod="csi-node-driver-hkm4q" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:04.534892 containerd[1952]: time="2025-05-17T00:24:04.534848609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:04.534892 containerd[1952]: time="2025-05-17T00:24:04.534879609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:04.534892 containerd[1952]: time="2025-05-17T00:24:04.534887878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:04.535041 containerd[1952]: time="2025-05-17T00:24:04.534943153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:04.572257 containerd[1952]: time="2025-05-17T00:24:04.572227273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkm4q,Uid:403f708a-0226-4cfa-98fa-55326c364f55,Namespace:calico-system,Attempt:1,} returns sandbox id \"0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6\"" May 17 00:24:04.573170 containerd[1952]: time="2025-05-17T00:24:04.573152485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:24:04.612316 systemd-networkd[1566]: cali02dec70a0e4: Link UP May 17 00:24:04.612636 systemd-networkd[1566]: cali02dec70a0e4: Gained carrier May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.466 [INFO][5675] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.473 [INFO][5675] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0 coredns-7c65d6cfc9- kube-system 5181617e-229c-477a-82dd-f49d74685250 925 0 2025-05-17 00:23:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-750554c5a6 coredns-7c65d6cfc9-kmxkn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali02dec70a0e4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmxkn" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-" May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.473 [INFO][5675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmxkn" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.486 [INFO][5737] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" HandleID="k8s-pod-network.6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.486 [INFO][5737] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" HandleID="k8s-pod-network.6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011b5e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-750554c5a6", "pod":"coredns-7c65d6cfc9-kmxkn", "timestamp":"2025-05-17 00:24:04.486843039 +0000 UTC"}, Hostname:"ci-4081.3.3-n-750554c5a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.486 [INFO][5737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.518 [INFO][5737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.518 [INFO][5737] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-750554c5a6' May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.591 [INFO][5737] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.594 [INFO][5737] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.597 [INFO][5737] ipam/ipam.go 511: Trying affinity for 192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.598 [INFO][5737] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.600 [INFO][5737] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.600 [INFO][5737] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.601 [INFO][5737] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60 May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.604 [INFO][5737] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.608 [INFO][5737] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.131/26] block=192.168.62.128/26 handle="k8s-pod-network.6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.608 [INFO][5737] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.131/26] handle="k8s-pod-network.6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.609 [INFO][5737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:24:04.623057 containerd[1952]: 2025-05-17 00:24:04.609 [INFO][5737] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.131/26] IPv6=[] ContainerID="6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" HandleID="k8s-pod-network.6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:04.624014 containerd[1952]: 2025-05-17 00:24:04.610 [INFO][5675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmxkn" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5181617e-229c-477a-82dd-f49d74685250", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"", Pod:"coredns-7c65d6cfc9-kmxkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02dec70a0e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:04.624014 containerd[1952]: 2025-05-17 00:24:04.610 [INFO][5675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.131/32] ContainerID="6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmxkn" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:04.624014 containerd[1952]: 2025-05-17 00:24:04.610 [INFO][5675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02dec70a0e4 ContainerID="6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmxkn" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:04.624014 containerd[1952]: 2025-05-17 00:24:04.612 [INFO][5675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-kmxkn" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:04.624014 containerd[1952]: 2025-05-17 00:24:04.613 [INFO][5675] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmxkn" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5181617e-229c-477a-82dd-f49d74685250", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60", Pod:"coredns-7c65d6cfc9-kmxkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02dec70a0e4", MAC:"6a:42:83:0a:f7:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:04.624014 containerd[1952]: 2025-05-17 00:24:04.621 [INFO][5675] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kmxkn" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:04.633782 containerd[1952]: time="2025-05-17T00:24:04.633500324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:04.633782 containerd[1952]: time="2025-05-17T00:24:04.633742729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:04.633782 containerd[1952]: time="2025-05-17T00:24:04.633751624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:04.633892 containerd[1952]: time="2025-05-17T00:24:04.633796615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:04.697249 containerd[1952]: time="2025-05-17T00:24:04.697188430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kmxkn,Uid:5181617e-229c-477a-82dd-f49d74685250,Namespace:kube-system,Attempt:1,} returns sandbox id \"6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60\"" May 17 00:24:04.699474 containerd[1952]: time="2025-05-17T00:24:04.699444911Z" level=info msg="CreateContainer within sandbox \"6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:24:04.704017 containerd[1952]: time="2025-05-17T00:24:04.703997707Z" level=info msg="CreateContainer within sandbox \"6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce263e7a0dcc12437c962311de0ac37f9cb961e577831e64361d501b3d63f0b7\"" May 17 00:24:04.704270 containerd[1952]: time="2025-05-17T00:24:04.704256455Z" level=info msg="StartContainer for \"ce263e7a0dcc12437c962311de0ac37f9cb961e577831e64361d501b3d63f0b7\"" May 17 00:24:04.715601 systemd-networkd[1566]: cali4fe2dba4a54: Link UP May 17 00:24:04.715761 systemd-networkd[1566]: cali4fe2dba4a54: Gained carrier May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.472 [INFO][5702] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.477 [INFO][5702] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0 calico-apiserver-69b4545c59- calico-apiserver 2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6 926 0 2025-05-17 00:23:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b4545c59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-750554c5a6 calico-apiserver-69b4545c59-j694m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4fe2dba4a54 [] [] }} ContainerID="0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-j694m" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-" May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.477 [INFO][5702] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-j694m" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.489 [INFO][5748] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" HandleID="k8s-pod-network.0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.489 [INFO][5748] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" 
HandleID="k8s-pod-network.0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a53c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-750554c5a6", "pod":"calico-apiserver-69b4545c59-j694m", "timestamp":"2025-05-17 00:24:04.489805047 +0000 UTC"}, Hostname:"ci-4081.3.3-n-750554c5a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.489 [INFO][5748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.609 [INFO][5748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.609 [INFO][5748] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-750554c5a6' May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.700 [INFO][5748] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.703 [INFO][5748] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.705 [INFO][5748] ipam/ipam.go 511: Trying affinity for 192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.706 [INFO][5748] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.707 [INFO][5748] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.707 [INFO][5748] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.708 [INFO][5748] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8 May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.710 [INFO][5748] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.713 [INFO][5748] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.132/26] block=192.168.62.128/26 handle="k8s-pod-network.0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.713 [INFO][5748] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.132/26] handle="k8s-pod-network.0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.713 [INFO][5748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:24:04.721938 containerd[1952]: 2025-05-17 00:24:04.713 [INFO][5748] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.132/26] IPv6=[] ContainerID="0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" HandleID="k8s-pod-network.0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:04.722383 containerd[1952]: 2025-05-17 00:24:04.714 [INFO][5702] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-j694m" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0", GenerateName:"calico-apiserver-69b4545c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b4545c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"", Pod:"calico-apiserver-69b4545c59-j694m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fe2dba4a54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:04.722383 containerd[1952]: 2025-05-17 00:24:04.714 [INFO][5702] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.132/32] ContainerID="0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-j694m" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:04.722383 containerd[1952]: 2025-05-17 00:24:04.714 [INFO][5702] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fe2dba4a54 ContainerID="0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-j694m" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:04.722383 containerd[1952]: 2025-05-17 00:24:04.715 [INFO][5702] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-j694m" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:04.722383 containerd[1952]: 2025-05-17 00:24:04.715 
[INFO][5702] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-j694m" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0", GenerateName:"calico-apiserver-69b4545c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b4545c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8", Pod:"calico-apiserver-69b4545c59-j694m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fe2dba4a54", MAC:"82:83:cd:d7:f5:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:04.722383 containerd[1952]: 2025-05-17 00:24:04.720 [INFO][5702] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-j694m" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:04.730621 containerd[1952]: time="2025-05-17T00:24:04.730581036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:04.730621 containerd[1952]: time="2025-05-17T00:24:04.730613126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:04.730621 containerd[1952]: time="2025-05-17T00:24:04.730622272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:04.730731 containerd[1952]: time="2025-05-17T00:24:04.730668841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:04.733595 containerd[1952]: time="2025-05-17T00:24:04.733571576Z" level=info msg="StartContainer for \"ce263e7a0dcc12437c962311de0ac37f9cb961e577831e64361d501b3d63f0b7\" returns successfully" May 17 00:24:04.758040 containerd[1952]: time="2025-05-17T00:24:04.758018440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b4545c59-j694m,Uid:2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8\"" May 17 00:24:05.405047 containerd[1952]: time="2025-05-17T00:24:05.405021467Z" level=info msg="StopPodSandbox for \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\"" May 17 00:24:05.405380 containerd[1952]: time="2025-05-17T00:24:05.405021374Z" level=info msg="StopPodSandbox for \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\"" May 17 00:24:05.441910 containerd[1952]: 2025-05-17 00:24:05.426 [INFO][6050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:24:05.441910 containerd[1952]: 2025-05-17 00:24:05.426 [INFO][6050] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" iface="eth0" netns="/var/run/netns/cni-155d3d19-636a-5289-5779-e5e017fc71c8" May 17 00:24:05.441910 containerd[1952]: 2025-05-17 00:24:05.426 [INFO][6050] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" iface="eth0" netns="/var/run/netns/cni-155d3d19-636a-5289-5779-e5e017fc71c8" May 17 00:24:05.441910 containerd[1952]: 2025-05-17 00:24:05.426 [INFO][6050] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" iface="eth0" netns="/var/run/netns/cni-155d3d19-636a-5289-5779-e5e017fc71c8" May 17 00:24:05.441910 containerd[1952]: 2025-05-17 00:24:05.426 [INFO][6050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:24:05.441910 containerd[1952]: 2025-05-17 00:24:05.426 [INFO][6050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:24:05.441910 containerd[1952]: 2025-05-17 00:24:05.436 [INFO][6079] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" HandleID="k8s-pod-network.55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:05.441910 containerd[1952]: 2025-05-17 00:24:05.436 [INFO][6079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:05.441910 containerd[1952]: 2025-05-17 00:24:05.436 [INFO][6079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:05.441910 containerd[1952]: 2025-05-17 00:24:05.439 [WARNING][6079] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" HandleID="k8s-pod-network.55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:05.441910 containerd[1952]: 2025-05-17 00:24:05.439 [INFO][6079] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" HandleID="k8s-pod-network.55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:05.441910 containerd[1952]: 2025-05-17 00:24:05.440 [INFO][6079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:05.441910 containerd[1952]: 2025-05-17 00:24:05.441 [INFO][6050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:24:05.442249 containerd[1952]: time="2025-05-17T00:24:05.441963022Z" level=info msg="TearDown network for sandbox \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\" successfully" May 17 00:24:05.442249 containerd[1952]: time="2025-05-17T00:24:05.441980224Z" level=info msg="StopPodSandbox for \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\" returns successfully" May 17 00:24:05.442356 containerd[1952]: time="2025-05-17T00:24:05.442344495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mm8pw,Uid:ed62d733-94ab-45e2-84ca-07ad82b2634c,Namespace:kube-system,Attempt:1,}" May 17 00:24:05.445739 containerd[1952]: 2025-05-17 00:24:05.427 [INFO][6051] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:24:05.445739 containerd[1952]: 2025-05-17 00:24:05.427 [INFO][6051] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" iface="eth0" netns="/var/run/netns/cni-accbe5d1-5c5c-778c-5d5d-62ec1bd4f278" May 17 00:24:05.445739 containerd[1952]: 2025-05-17 00:24:05.427 [INFO][6051] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" iface="eth0" netns="/var/run/netns/cni-accbe5d1-5c5c-778c-5d5d-62ec1bd4f278" May 17 00:24:05.445739 containerd[1952]: 2025-05-17 00:24:05.427 [INFO][6051] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" iface="eth0" netns="/var/run/netns/cni-accbe5d1-5c5c-778c-5d5d-62ec1bd4f278" May 17 00:24:05.445739 containerd[1952]: 2025-05-17 00:24:05.427 [INFO][6051] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:24:05.445739 containerd[1952]: 2025-05-17 00:24:05.427 [INFO][6051] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:24:05.445739 containerd[1952]: 2025-05-17 00:24:05.436 [INFO][6081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" HandleID="k8s-pod-network.ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:05.445739 containerd[1952]: 2025-05-17 00:24:05.436 [INFO][6081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:05.445739 containerd[1952]: 2025-05-17 00:24:05.440 [INFO][6081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:05.445739 containerd[1952]: 2025-05-17 00:24:05.443 [WARNING][6081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" HandleID="k8s-pod-network.ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:05.445739 containerd[1952]: 2025-05-17 00:24:05.443 [INFO][6081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" HandleID="k8s-pod-network.ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:05.445739 containerd[1952]: 2025-05-17 00:24:05.444 [INFO][6081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:05.445739 containerd[1952]: 2025-05-17 00:24:05.445 [INFO][6051] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:24:05.445979 containerd[1952]: time="2025-05-17T00:24:05.445813838Z" level=info msg="TearDown network for sandbox \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\" successfully" May 17 00:24:05.445979 containerd[1952]: time="2025-05-17T00:24:05.445824627Z" level=info msg="StopPodSandbox for \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\" returns successfully" May 17 00:24:05.446114 containerd[1952]: time="2025-05-17T00:24:05.446101589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b4545c59-2hqxd,Uid:6d8a08e2-6fb2-41c5-aa91-98552c67cdeb,Namespace:calico-apiserver,Attempt:1,}" May 17 00:24:05.456053 systemd[1]: run-netns-cni\x2d155d3d19\x2d636a\x2d5289\x2d5779\x2de5e017fc71c8.mount: Deactivated successfully. May 17 00:24:05.456169 systemd[1]: run-netns-cni\x2daccbe5d1\x2d5c5c\x2d778c\x2d5d5d\x2d62ec1bd4f278.mount: Deactivated successfully. 
May 17 00:24:05.499959 systemd-networkd[1566]: cali580bc0191ad: Link UP May 17 00:24:05.500124 systemd-networkd[1566]: cali580bc0191ad: Gained carrier May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.459 [INFO][6110] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.466 [INFO][6110] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0 coredns-7c65d6cfc9- kube-system ed62d733-94ab-45e2-84ca-07ad82b2634c 947 0 2025-05-17 00:23:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-750554c5a6 coredns-7c65d6cfc9-mm8pw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali580bc0191ad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8pw" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-" May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.466 [INFO][6110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8pw" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.478 [INFO][6157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" HandleID="k8s-pod-network.a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.478 [INFO][6157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" HandleID="k8s-pod-network.a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f880), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-750554c5a6", "pod":"coredns-7c65d6cfc9-mm8pw", "timestamp":"2025-05-17 00:24:05.478499377 +0000 UTC"}, Hostname:"ci-4081.3.3-n-750554c5a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.478 [INFO][6157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.478 [INFO][6157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.478 [INFO][6157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-750554c5a6' May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.482 [INFO][6157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.486 [INFO][6157] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.489 [INFO][6157] ipam/ipam.go 511: Trying affinity for 192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.490 [INFO][6157] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.491 [INFO][6157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.491 [INFO][6157] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.493 [INFO][6157] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855 May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.495 [INFO][6157] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.498 [INFO][6157] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.133/26] block=192.168.62.128/26 handle="k8s-pod-network.a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.498 [INFO][6157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.133/26] handle="k8s-pod-network.a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.498 [INFO][6157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
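
The [6157] run above is the per-host IPAM walk under the lock: look up this host's block affinities, confirm affinity for 192.168.62.128/26, load the block, take the next free ordinal (claimed as 192.168.62.133), create a handle, and write the block back. A toy version of the "assign one address from an affine /26" step, using a flat bitmap rather than Calico's real block model:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block models a /26 allocation block: 64 ordinals, one flag per address.
    type block struct {
        cidr netip.Prefix // e.g. 192.168.62.128/26
        used [64]bool
    }

    // assign claims the lowest free ordinal and returns its address.
    func (b *block) assign() (netip.Addr, bool) {
        addr := b.cidr.Addr()
        for ord := 0; ord < 64; ord++ {
            if !b.used[ord] {
                b.used[ord] = true
                return addr, true
            }
            addr = addr.Next()
        }
        return netip.Addr{}, false // exhausted; real IPAM would claim another block
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.62.128/26")}
        for i := 0; i < 5; i++ { // pretend .128-.132 went to earlier endpoints
            b.assign()
        }
        ip, _ := b.assign()
        fmt.Println(ip) // 192.168.62.133, matching the claim logged above
    }
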
May 17 00:24:05.505441 containerd[1952]: 2025-05-17 00:24:05.498 [INFO][6157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.133/26] IPv6=[] ContainerID="a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" HandleID="k8s-pod-network.a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:05.505848 containerd[1952]: 2025-05-17 00:24:05.499 [INFO][6110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8pw" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ed62d733-94ab-45e2-84ca-07ad82b2634c", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"", Pod:"coredns-7c65d6cfc9-mm8pw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali580bc0191ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:05.505848 containerd[1952]: 2025-05-17 00:24:05.499 [INFO][6110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.133/32] ContainerID="a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8pw" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:05.505848 containerd[1952]: 2025-05-17 00:24:05.499 [INFO][6110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali580bc0191ad ContainerID="a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8pw" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:05.505848 containerd[1952]: 2025-05-17 00:24:05.500 [INFO][6110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-mm8pw" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:05.505848 containerd[1952]: 2025-05-17 00:24:05.500 [INFO][6110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8pw" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ed62d733-94ab-45e2-84ca-07ad82b2634c", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855", Pod:"coredns-7c65d6cfc9-mm8pw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali580bc0191ad", MAC:"2e:3f:7e:be:06:cb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:05.505848 containerd[1952]: 2025-05-17 00:24:05.504 [INFO][6110] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8pw" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:05.513726 containerd[1952]: time="2025-05-17T00:24:05.513452810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:05.513726 containerd[1952]: time="2025-05-17T00:24:05.513698394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:05.513726 containerd[1952]: time="2025-05-17T00:24:05.513708253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:05.513834 containerd[1952]: time="2025-05-17T00:24:05.513759206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:05.539302 kubelet[3260]: I0517 00:24:05.539248 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kmxkn" podStartSLOduration=32.539223664 podStartE2EDuration="32.539223664s" podCreationTimestamp="2025-05-17 00:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:24:05.539044864 +0000 UTC m=+38.208017397" watchObservedRunningTime="2025-05-17 00:24:05.539223664 +0000 UTC m=+38.208196322" May 17 00:24:05.558964 containerd[1952]: time="2025-05-17T00:24:05.558942129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mm8pw,Uid:ed62d733-94ab-45e2-84ca-07ad82b2634c,Namespace:kube-system,Attempt:1,} returns sandbox id \"a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855\"" May 17 00:24:05.560039 containerd[1952]: time="2025-05-17T00:24:05.560023850Z" level=info msg="CreateContainer within sandbox \"a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:24:05.563806 containerd[1952]: time="2025-05-17T00:24:05.563789102Z" level=info msg="CreateContainer within sandbox \"a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb617c916675b7ed2adabb12e94b6a6210766a0606596b27483ce3d6b0ab64bc\"" May 17 00:24:05.564070 containerd[1952]: time="2025-05-17T00:24:05.564056157Z" level=info msg="StartContainer for \"bb617c916675b7ed2adabb12e94b6a6210766a0606596b27483ce3d6b0ab64bc\"" May 17 00:24:05.573758 systemd[1]: Started sshd@10-147.75.202.203:22-14.103.122.89:52058.service - OpenSSH per-connection server daemon (14.103.122.89:52058). 
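
In the kubelet line above, podStartSLOduration is plain wall-clock arithmetic: observedRunningTime (00:24:05.539) minus podCreationTimestamp (00:23:33) gives 32.539223664 s, and it equals podStartE2EDuration because the zero-value pull timestamps ("0001-01-01 ...") mean no image pull was observed for this pod. The "m=+38.208..." suffix is the monotonic-clock offset since the kubelet process started. The subtraction, for the record:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's time.String() form
        created, _ := time.Parse(layout, "2025-05-17 00:23:33 +0000 UTC")
        running, _ := time.Parse(layout, "2025-05-17 00:24:05.539223664 +0000 UTC")
        fmt.Println(running.Sub(created)) // 32.539223664s, the logged podStartSLOduration
    }
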
May 17 00:24:05.585056 containerd[1952]: time="2025-05-17T00:24:05.585032965Z" level=info msg="StartContainer for \"bb617c916675b7ed2adabb12e94b6a6210766a0606596b27483ce3d6b0ab64bc\" returns successfully" May 17 00:24:05.601343 systemd-networkd[1566]: calif0391eb158c: Link UP May 17 00:24:05.601466 systemd-networkd[1566]: calif0391eb158c: Gained carrier May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.459 [INFO][6121] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.466 [INFO][6121] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0 calico-apiserver-69b4545c59- calico-apiserver 6d8a08e2-6fb2-41c5-aa91-98552c67cdeb 948 0 2025-05-17 00:23:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b4545c59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-750554c5a6 calico-apiserver-69b4545c59-2hqxd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif0391eb158c [] [] }} ContainerID="5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-2hqxd" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-" May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.466 [INFO][6121] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-2hqxd" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.478 [INFO][6155] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" HandleID="k8s-pod-network.5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.478 [INFO][6155] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" HandleID="k8s-pod-network.5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000618630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-750554c5a6", "pod":"calico-apiserver-69b4545c59-2hqxd", "timestamp":"2025-05-17 00:24:05.478838395 +0000 UTC"}, Hostname:"ci-4081.3.3-n-750554c5a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.478 [INFO][6155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.498 [INFO][6155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.498 [INFO][6155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-750554c5a6' May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.583 [INFO][6155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.586 [INFO][6155] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.589 [INFO][6155] ipam/ipam.go 511: Trying affinity for 192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.591 [INFO][6155] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.592 [INFO][6155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.592 [INFO][6155] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.593 [INFO][6155] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.595 [INFO][6155] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.599 [INFO][6155] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.134/26] block=192.168.62.128/26 handle="k8s-pod-network.5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.599 [INFO][6155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.134/26] handle="k8s-pod-network.5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.599 [INFO][6155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
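
Worth noticing in the two interleaved IPAM runs: [6157] (the coredns pod) and [6155] (the apiserver pod) both log "About to acquire host-wide IPAM lock" at 05.478, but [6155] only logs "Acquired" at 05.498, immediately after [6157] releases, so concurrent CNI ADDs on this node are serialized and hand out distinct addresses (.133, then .134). Because the competing plugin invocations are separate processes, the lock must live outside any one process; a file lock is the natural Linux mechanism (the path below is illustrative, not Calico's actual lock file):

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // withHostWideLock serializes a critical section across processes via flock(2).
    func withHostWideLock(path string, fn func() error) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil { // blocks until free
            return err
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
        return fn()
    }

    func main() {
        err := withHostWideLock("/tmp/ipam.lock", func() error {
            fmt.Println("assign from block 192.168.62.128/26 ...")
            return nil
        })
        if err != nil {
            fmt.Println("lock error:", err)
        }
    }
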
May 17 00:24:05.606757 containerd[1952]: 2025-05-17 00:24:05.599 [INFO][6155] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.134/26] IPv6=[] ContainerID="5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" HandleID="k8s-pod-network.5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:05.607143 containerd[1952]: 2025-05-17 00:24:05.600 [INFO][6121] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-2hqxd" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0", GenerateName:"calico-apiserver-69b4545c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d8a08e2-6fb2-41c5-aa91-98552c67cdeb", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b4545c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"", Pod:"calico-apiserver-69b4545c59-2hqxd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0391eb158c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:05.607143 containerd[1952]: 2025-05-17 00:24:05.600 [INFO][6121] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.134/32] ContainerID="5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-2hqxd" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:05.607143 containerd[1952]: 2025-05-17 00:24:05.600 [INFO][6121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0391eb158c ContainerID="5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-2hqxd" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:05.607143 containerd[1952]: 2025-05-17 00:24:05.601 [INFO][6121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-2hqxd" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:05.607143 containerd[1952]: 2025-05-17 00:24:05.601 
[INFO][6121] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-2hqxd" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0", GenerateName:"calico-apiserver-69b4545c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d8a08e2-6fb2-41c5-aa91-98552c67cdeb", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b4545c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c", Pod:"calico-apiserver-69b4545c59-2hqxd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0391eb158c", MAC:"d2:cf:a6:e9:90:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:05.607143 containerd[1952]: 2025-05-17 00:24:05.605 [INFO][6121] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c" Namespace="calico-apiserver" Pod="calico-apiserver-69b4545c59-2hqxd" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:05.615417 containerd[1952]: time="2025-05-17T00:24:05.615140651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:05.615417 containerd[1952]: time="2025-05-17T00:24:05.615342581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:05.615417 containerd[1952]: time="2025-05-17T00:24:05.615350719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:05.615516 containerd[1952]: time="2025-05-17T00:24:05.615428870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:05.653030 containerd[1952]: time="2025-05-17T00:24:05.652978092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b4545c59-2hqxd,Uid:6d8a08e2-6fb2-41c5-aa91-98552c67cdeb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c\"" May 17 00:24:05.785727 systemd-networkd[1566]: cali4fe2dba4a54: Gained IPv6LL May 17 00:24:06.002958 containerd[1952]: time="2025-05-17T00:24:06.002903138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:06.003165 containerd[1952]: time="2025-05-17T00:24:06.003128211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:24:06.003519 containerd[1952]: time="2025-05-17T00:24:06.003483272Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:06.004413 containerd[1952]: time="2025-05-17T00:24:06.004378896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:06.004840 containerd[1952]: time="2025-05-17T00:24:06.004804989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.431630249s" May 17 00:24:06.004840 containerd[1952]: time="2025-05-17T00:24:06.004820208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:24:06.005578 containerd[1952]: time="2025-05-17T00:24:06.005552320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:24:06.006378 containerd[1952]: time="2025-05-17T00:24:06.006364906Z" level=info msg="CreateContainer within sandbox \"0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:24:06.011316 containerd[1952]: time="2025-05-17T00:24:06.011302555Z" level=info msg="CreateContainer within sandbox \"0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8e3e6574c3616776d7da4eef98b80d0806afedcb4abdace851afeb8d273076d3\"" May 17 00:24:06.011680 containerd[1952]: time="2025-05-17T00:24:06.011623038Z" level=info msg="StartContainer for \"8e3e6574c3616776d7da4eef98b80d0806afedcb4abdace851afeb8d273076d3\"" May 17 00:24:06.047061 containerd[1952]: time="2025-05-17T00:24:06.047005712Z" level=info msg="StartContainer for \"8e3e6574c3616776d7da4eef98b80d0806afedcb4abdace851afeb8d273076d3\" returns successfully" May 17 00:24:06.169737 systemd-networkd[1566]: cali4a95f1c1ffc: Gained IPv6LL May 17 00:24:06.169950 systemd-networkd[1566]: cali02dec70a0e4: Gained IPv6LL May 17 00:24:06.404437 containerd[1952]: time="2025-05-17T00:24:06.404373478Z" level=info msg="StopPodSandbox for 
\"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\"" May 17 00:24:06.449497 containerd[1952]: 2025-05-17 00:24:06.432 [INFO][6408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:24:06.449497 containerd[1952]: 2025-05-17 00:24:06.432 [INFO][6408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" iface="eth0" netns="/var/run/netns/cni-3c383edd-1cd9-e95a-c6c9-2b5f3df85161" May 17 00:24:06.449497 containerd[1952]: 2025-05-17 00:24:06.432 [INFO][6408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" iface="eth0" netns="/var/run/netns/cni-3c383edd-1cd9-e95a-c6c9-2b5f3df85161" May 17 00:24:06.449497 containerd[1952]: 2025-05-17 00:24:06.433 [INFO][6408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" iface="eth0" netns="/var/run/netns/cni-3c383edd-1cd9-e95a-c6c9-2b5f3df85161" May 17 00:24:06.449497 containerd[1952]: 2025-05-17 00:24:06.433 [INFO][6408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:24:06.449497 containerd[1952]: 2025-05-17 00:24:06.433 [INFO][6408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:24:06.449497 containerd[1952]: 2025-05-17 00:24:06.443 [INFO][6448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" HandleID="k8s-pod-network.35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" Workload="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:06.449497 containerd[1952]: 2025-05-17 00:24:06.443 [INFO][6448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:06.449497 containerd[1952]: 2025-05-17 00:24:06.443 [INFO][6448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:06.449497 containerd[1952]: 2025-05-17 00:24:06.447 [WARNING][6448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" HandleID="k8s-pod-network.35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" Workload="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:06.449497 containerd[1952]: 2025-05-17 00:24:06.447 [INFO][6448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" HandleID="k8s-pod-network.35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" Workload="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:06.449497 containerd[1952]: 2025-05-17 00:24:06.448 [INFO][6448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:06.449497 containerd[1952]: 2025-05-17 00:24:06.448 [INFO][6408] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:24:06.449956 containerd[1952]: time="2025-05-17T00:24:06.449599986Z" level=info msg="TearDown network for sandbox \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\" successfully" May 17 00:24:06.449956 containerd[1952]: time="2025-05-17T00:24:06.449625337Z" level=info msg="StopPodSandbox for \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\" returns successfully" May 17 00:24:06.450057 containerd[1952]: time="2025-05-17T00:24:06.450043607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-jr8s4,Uid:307fc931-df0d-46ad-a0f5-9c642c960ef0,Namespace:calico-system,Attempt:1,}" May 17 00:24:06.453654 systemd[1]: run-netns-cni\x2d3c383edd\x2d1cd9\x2de95a\x2dc6c9\x2d2b5f3df85161.mount: Deactivated successfully. May 17 00:24:06.505268 systemd-networkd[1566]: cali60e3c7769ed: Link UP May 17 00:24:06.505425 systemd-networkd[1566]: cali60e3c7769ed: Gained carrier May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.463 [INFO][6462] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.469 [INFO][6462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0 goldmane-8f77d7b6c- calico-system 307fc931-df0d-46ad-a0f5-9c642c960ef0 974 0 2025-05-17 00:23:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.3-n-750554c5a6 goldmane-8f77d7b6c-jr8s4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali60e3c7769ed [] [] }} ContainerID="ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" Namespace="calico-system" Pod="goldmane-8f77d7b6c-jr8s4" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-" May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.469 [INFO][6462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" Namespace="calico-system" Pod="goldmane-8f77d7b6c-jr8s4" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.482 [INFO][6482] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" HandleID="k8s-pod-network.ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" Workload="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.483 [INFO][6482] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" HandleID="k8s-pod-network.ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" Workload="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00068e8e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-750554c5a6", "pod":"goldmane-8f77d7b6c-jr8s4", "timestamp":"2025-05-17 00:24:06.482986595 +0000 UTC"}, Hostname:"ci-4081.3.3-n-750554c5a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.483 [INFO][6482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.483 [INFO][6482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.483 [INFO][6482] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-750554c5a6' May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.487 [INFO][6482] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.491 [INFO][6482] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-750554c5a6" May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.494 [INFO][6482] ipam/ipam.go 511: Trying affinity for 192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.495 [INFO][6482] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.497 [INFO][6482] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.497 [INFO][6482] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.498 [INFO][6482] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21 May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.501 [INFO][6482] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.503 [INFO][6482] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.135/26] block=192.168.62.128/26 handle="k8s-pod-network.ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.503 [INFO][6482] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.135/26] handle="k8s-pod-network.ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.503 [INFO][6482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
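
Every endpoint in this log gets a host-side interface named "cali" plus an 11-character token (cali580bc0191ad, calif0391eb158c, cali60e3c7769ed, ...). The 4+11 shape is no accident: it fills exactly the kernel's 15-byte IFNAMSIZ budget while staying deterministic per workload. Calico derives the token from a hash of the workload identity; the exact hash input below (SHA-1 of "namespace.podname") is an assumption for illustration:

    package main

    import (
        "crypto/sha1"
        "fmt"
    )

    // vethName derives a stable, IFNAMSIZ-safe host interface name for a
    // workload. Hash input and truncation are assumptions, not a spec:
    // "cali" (4 bytes) + 11 hex chars = 15 bytes, the kernel's maximum.
    func vethName(namespace, pod string) string {
        sum := sha1.Sum([]byte(namespace + "." + pod))
        return "cali" + fmt.Sprintf("%x", sum[:])[:11]
    }

    func main() {
        fmt.Println(vethName("kube-system", "coredns-7c65d6cfc9-mm8pw"))
    }
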
May 17 00:24:06.511753 containerd[1952]: 2025-05-17 00:24:06.503 [INFO][6482] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.135/26] IPv6=[] ContainerID="ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" HandleID="k8s-pod-network.ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" Workload="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:06.512170 containerd[1952]: 2025-05-17 00:24:06.504 [INFO][6462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" Namespace="calico-system" Pod="goldmane-8f77d7b6c-jr8s4" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"307fc931-df0d-46ad-a0f5-9c642c960ef0", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"", Pod:"goldmane-8f77d7b6c-jr8s4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali60e3c7769ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:06.512170 containerd[1952]: 2025-05-17 00:24:06.504 [INFO][6462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.135/32] ContainerID="ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" Namespace="calico-system" Pod="goldmane-8f77d7b6c-jr8s4" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:06.512170 containerd[1952]: 2025-05-17 00:24:06.504 [INFO][6462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e3c7769ed ContainerID="ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" Namespace="calico-system" Pod="goldmane-8f77d7b6c-jr8s4" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:06.512170 containerd[1952]: 2025-05-17 00:24:06.505 [INFO][6462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" Namespace="calico-system" Pod="goldmane-8f77d7b6c-jr8s4" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:06.512170 containerd[1952]: 2025-05-17 00:24:06.505 [INFO][6462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" Namespace="calico-system" 
Pod="goldmane-8f77d7b6c-jr8s4" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"307fc931-df0d-46ad-a0f5-9c642c960ef0", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21", Pod:"goldmane-8f77d7b6c-jr8s4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali60e3c7769ed", MAC:"e6:c5:87:a7:78:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:06.512170 containerd[1952]: 2025-05-17 00:24:06.510 [INFO][6462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21" Namespace="calico-system" Pod="goldmane-8f77d7b6c-jr8s4" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:06.519736 containerd[1952]: time="2025-05-17T00:24:06.519649548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:06.519736 containerd[1952]: time="2025-05-17T00:24:06.519692980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:06.519942 containerd[1952]: time="2025-05-17T00:24:06.519885085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:06.519972 containerd[1952]: time="2025-05-17T00:24:06.519940465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:06.542231 kubelet[3260]: I0517 00:24:06.542198 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mm8pw" podStartSLOduration=33.542182791 podStartE2EDuration="33.542182791s" podCreationTimestamp="2025-05-17 00:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:24:06.541808407 +0000 UTC m=+39.210780945" watchObservedRunningTime="2025-05-17 00:24:06.542182791 +0000 UTC m=+39.211155323" May 17 00:24:06.553624 systemd-networkd[1566]: cali580bc0191ad: Gained IPv6LL May 17 00:24:06.568686 containerd[1952]: time="2025-05-17T00:24:06.568632262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-jr8s4,Uid:307fc931-df0d-46ad-a0f5-9c642c960ef0,Namespace:calico-system,Attempt:1,} returns sandbox id \"ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21\"" May 17 00:24:06.938067 systemd-networkd[1566]: calif0391eb158c: Gained IPv6LL May 17 00:24:07.019001 sshd[6239]: Invalid user zjw from 14.103.122.89 port 52058 May 17 00:24:07.166073 sshd[6239]: Received disconnect from 14.103.122.89 port 52058:11: Bye Bye [preauth] May 17 00:24:07.166073 sshd[6239]: Disconnected from invalid user zjw 14.103.122.89 port 52058 [preauth] May 17 00:24:07.167882 systemd[1]: sshd@10-147.75.202.203:22-14.103.122.89:52058.service: Deactivated successfully. May 17 00:24:07.404693 containerd[1952]: time="2025-05-17T00:24:07.404628056Z" level=info msg="StopPodSandbox for \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\"" May 17 00:24:07.441438 containerd[1952]: 2025-05-17 00:24:07.425 [INFO][6563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" May 17 00:24:07.441438 containerd[1952]: 2025-05-17 00:24:07.425 [INFO][6563] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" iface="eth0" netns="/var/run/netns/cni-b003e7e9-4d77-93d4-8b08-6eb89d2a73f4" May 17 00:24:07.441438 containerd[1952]: 2025-05-17 00:24:07.425 [INFO][6563] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" iface="eth0" netns="/var/run/netns/cni-b003e7e9-4d77-93d4-8b08-6eb89d2a73f4" May 17 00:24:07.441438 containerd[1952]: 2025-05-17 00:24:07.425 [INFO][6563] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" iface="eth0" netns="/var/run/netns/cni-b003e7e9-4d77-93d4-8b08-6eb89d2a73f4" May 17 00:24:07.441438 containerd[1952]: 2025-05-17 00:24:07.425 [INFO][6563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" May 17 00:24:07.441438 containerd[1952]: 2025-05-17 00:24:07.425 [INFO][6563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" May 17 00:24:07.441438 containerd[1952]: 2025-05-17 00:24:07.435 [INFO][6582] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" HandleID="k8s-pod-network.6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:07.441438 containerd[1952]: 2025-05-17 00:24:07.435 [INFO][6582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:07.441438 containerd[1952]: 2025-05-17 00:24:07.435 [INFO][6582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:07.441438 containerd[1952]: 2025-05-17 00:24:07.438 [WARNING][6582] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" HandleID="k8s-pod-network.6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:07.441438 containerd[1952]: 2025-05-17 00:24:07.438 [INFO][6582] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" HandleID="k8s-pod-network.6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:07.441438 containerd[1952]: 2025-05-17 00:24:07.439 [INFO][6582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:07.441438 containerd[1952]: 2025-05-17 00:24:07.440 [INFO][6563] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" May 17 00:24:07.441747 containerd[1952]: time="2025-05-17T00:24:07.441555498Z" level=info msg="TearDown network for sandbox \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\" successfully" May 17 00:24:07.441747 containerd[1952]: time="2025-05-17T00:24:07.441575664Z" level=info msg="StopPodSandbox for \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\" returns successfully" May 17 00:24:07.441970 containerd[1952]: time="2025-05-17T00:24:07.441954930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb9df59f4-w67vm,Uid:7c71ecf2-a673-4248-97b2-14f7ed1d6b2c,Namespace:calico-system,Attempt:1,}" May 17 00:24:07.451580 systemd[1]: run-netns-cni\x2db003e7e9\x2d4d77\x2d93d4\x2d8b08\x2d6eb89d2a73f4.mount: Deactivated successfully. 
May 17 00:24:07.575430 systemd-networkd[1566]: cali2f4fa8ff320: Link UP May 17 00:24:07.575579 systemd-networkd[1566]: cali2f4fa8ff320: Gained carrier May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.520 [INFO][6647] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.527 [INFO][6647] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0 calico-kube-controllers-5cb9df59f4- calico-system 7c71ecf2-a673-4248-97b2-14f7ed1d6b2c 990 0 2025-05-17 00:23:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5cb9df59f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-n-750554c5a6 calico-kube-controllers-5cb9df59f4-w67vm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2f4fa8ff320 [] [] }} ContainerID="c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" Namespace="calico-system" Pod="calico-kube-controllers-5cb9df59f4-w67vm" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-" May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.527 [INFO][6647] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" Namespace="calico-system" Pod="calico-kube-controllers-5cb9df59f4-w67vm" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.541 [INFO][6671] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" HandleID="k8s-pod-network.c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.541 [INFO][6671] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" HandleID="k8s-pod-network.c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d7f20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-750554c5a6", "pod":"calico-kube-controllers-5cb9df59f4-w67vm", "timestamp":"2025-05-17 00:24:07.541049302 +0000 UTC"}, Hostname:"ci-4081.3.3-n-750554c5a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.541 [INFO][6671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.541 [INFO][6671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.541 [INFO][6671] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-750554c5a6' May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.544 [INFO][6671] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.547 [INFO][6671] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-750554c5a6" May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.549 [INFO][6671] ipam/ipam.go 511: Trying affinity for 192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.550 [INFO][6671] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.551 [INFO][6671] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081.3.3-n-750554c5a6" May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.551 [INFO][6671] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.552 [INFO][6671] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71 May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.556 [INFO][6671] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.573 [INFO][6671] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.136/26] block=192.168.62.128/26 handle="k8s-pod-network.c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.573 [INFO][6671] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.136/26] handle="k8s-pod-network.c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" host="ci-4081.3.3-n-750554c5a6" May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.573 [INFO][6671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
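
"Writing block in order to claim IPs" is the commit point of each assignment: the claim is real only once the updated block lands in the datastore. The host-wide lock seen above serializes callers on this node only; against writers on other nodes the block write has to rely on the datastore's compare-and-swap semantics, retrying on revision conflict. A generic sketch of that read-modify-CAS loop (the revisioned store here is hypothetical; Calico delegates this to its datastore client):

    package main

    import (
        "fmt"
        "sync"
    )

    // versioned is a revisioned copy of an allocation block.
    type versioned struct {
        rev  int
        used map[int]bool // ordinal -> taken
    }

    // kv is a stand-in datastore offering read and compare-and-swap write.
    type kv struct {
        mu sync.Mutex
        b  versioned
    }

    func (s *kv) read() versioned { s.mu.Lock(); defer s.mu.Unlock(); return s.b }

    func (s *kv) casWrite(expectRev int, used map[int]bool) bool {
        s.mu.Lock()
        defer s.mu.Unlock()
        if s.b.rev != expectRev {
            return false // stale revision: some other writer got there first
        }
        s.b = versioned{rev: expectRev + 1, used: used}
        return true
    }

    // claim retries read-modify-CAS until the block write lands.
    func claim(s *kv) int {
        for {
            cur := s.read()
            ord := 0
            for cur.used[ord] {
                ord++
            }
            next := map[int]bool{ord: true}
            for k, v := range cur.used {
                next[k] = v
            }
            if s.casWrite(cur.rev, next) {
                return ord
            }
            // conflict: re-read the block and try again
        }
    }

    func main() {
        s := &kv{b: versioned{used: map[int]bool{}}}
        fmt.Println(claim(s), claim(s)) // ordinals 0 and 1 from the block
    }
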
May 17 00:24:07.581832 containerd[1952]: 2025-05-17 00:24:07.573 [INFO][6671] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.136/26] IPv6=[] ContainerID="c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" HandleID="k8s-pod-network.c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:07.582393 containerd[1952]: 2025-05-17 00:24:07.574 [INFO][6647] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" Namespace="calico-system" Pod="calico-kube-controllers-5cb9df59f4-w67vm" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0", GenerateName:"calico-kube-controllers-5cb9df59f4-", Namespace:"calico-system", SelfLink:"", UID:"7c71ecf2-a673-4248-97b2-14f7ed1d6b2c", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cb9df59f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"", Pod:"calico-kube-controllers-5cb9df59f4-w67vm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f4fa8ff320", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:07.582393 containerd[1952]: 2025-05-17 00:24:07.574 [INFO][6647] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.136/32] ContainerID="c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" Namespace="calico-system" Pod="calico-kube-controllers-5cb9df59f4-w67vm" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:07.582393 containerd[1952]: 2025-05-17 00:24:07.574 [INFO][6647] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f4fa8ff320 ContainerID="c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" Namespace="calico-system" Pod="calico-kube-controllers-5cb9df59f4-w67vm" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:07.582393 containerd[1952]: 2025-05-17 00:24:07.575 [INFO][6647] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" Namespace="calico-system" Pod="calico-kube-controllers-5cb9df59f4-w67vm" 
WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:07.582393 containerd[1952]: 2025-05-17 00:24:07.575 [INFO][6647] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" Namespace="calico-system" Pod="calico-kube-controllers-5cb9df59f4-w67vm" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0", GenerateName:"calico-kube-controllers-5cb9df59f4-", Namespace:"calico-system", SelfLink:"", UID:"7c71ecf2-a673-4248-97b2-14f7ed1d6b2c", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cb9df59f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71", Pod:"calico-kube-controllers-5cb9df59f4-w67vm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f4fa8ff320", MAC:"86:37:bf:9d:4e:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:07.582393 containerd[1952]: 2025-05-17 00:24:07.580 [INFO][6647] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" Namespace="calico-system" Pod="calico-kube-controllers-5cb9df59f4-w67vm" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:07.589773 containerd[1952]: time="2025-05-17T00:24:07.589530461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:07.589832 containerd[1952]: time="2025-05-17T00:24:07.589774977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:07.589832 containerd[1952]: time="2025-05-17T00:24:07.589787485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:07.589865 containerd[1952]: time="2025-05-17T00:24:07.589844147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:07.626282 containerd[1952]: time="2025-05-17T00:24:07.626260410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cb9df59f4-w67vm,Uid:7c71ecf2-a673-4248-97b2-14f7ed1d6b2c,Namespace:calico-system,Attempt:1,} returns sandbox id \"c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71\"" May 17 00:24:07.769589 systemd-networkd[1566]: cali60e3c7769ed: Gained IPv6LL May 17 00:24:07.820455 containerd[1952]: time="2025-05-17T00:24:07.820404528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:07.820615 containerd[1952]: time="2025-05-17T00:24:07.820566486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:24:07.820958 containerd[1952]: time="2025-05-17T00:24:07.820918908Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:07.822087 containerd[1952]: time="2025-05-17T00:24:07.822043388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:07.822700 containerd[1952]: time="2025-05-17T00:24:07.822651497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 1.817063771s" May 17 00:24:07.822700 containerd[1952]: time="2025-05-17T00:24:07.822676931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:24:07.823163 containerd[1952]: time="2025-05-17T00:24:07.823126415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:24:07.823599 containerd[1952]: time="2025-05-17T00:24:07.823584273Z" level=info msg="CreateContainer within sandbox \"0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:24:07.827182 containerd[1952]: time="2025-05-17T00:24:07.827136041Z" level=info msg="CreateContainer within sandbox \"0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5ac385b2ebaceb2f84849215ce8a20b2581b7321321034412a017374c5075dc1\"" May 17 00:24:07.827386 containerd[1952]: time="2025-05-17T00:24:07.827345906Z" level=info msg="StartContainer for \"5ac385b2ebaceb2f84849215ce8a20b2581b7321321034412a017374c5075dc1\"" May 17 00:24:07.878390 containerd[1952]: time="2025-05-17T00:24:07.878367175Z" level=info msg="StartContainer for \"5ac385b2ebaceb2f84849215ce8a20b2581b7321321034412a017374c5075dc1\" returns successfully" May 17 00:24:08.236159 containerd[1952]: time="2025-05-17T00:24:08.236089105Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:08.236271 
containerd[1952]: time="2025-05-17T00:24:08.236251885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:24:08.237743 containerd[1952]: time="2025-05-17T00:24:08.237700657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 414.560288ms" May 17 00:24:08.237743 containerd[1952]: time="2025-05-17T00:24:08.237715456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:24:08.238337 containerd[1952]: time="2025-05-17T00:24:08.238326724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:24:08.238889 containerd[1952]: time="2025-05-17T00:24:08.238857714Z" level=info msg="CreateContainer within sandbox \"5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:24:08.243121 containerd[1952]: time="2025-05-17T00:24:08.243078605Z" level=info msg="CreateContainer within sandbox \"5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ae603f03fd6c9f44b446a95897f0c042e5647cde8c518ffdee95ec94cfaf4f9c\"" May 17 00:24:08.243315 containerd[1952]: time="2025-05-17T00:24:08.243282977Z" level=info msg="StartContainer for \"ae603f03fd6c9f44b446a95897f0c042e5647cde8c518ffdee95ec94cfaf4f9c\"" May 17 00:24:08.293075 containerd[1952]: time="2025-05-17T00:24:08.293055948Z" level=info msg="StartContainer for \"ae603f03fd6c9f44b446a95897f0c042e5647cde8c518ffdee95ec94cfaf4f9c\" returns successfully" May 17 00:24:08.547144 kubelet[3260]: I0517 00:24:08.547042 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69b4545c59-2hqxd" podStartSLOduration=25.96239756 podStartE2EDuration="28.547024684s" podCreationTimestamp="2025-05-17 00:23:40 +0000 UTC" firstStartedPulling="2025-05-17 00:24:05.653534279 +0000 UTC m=+38.322506811" lastFinishedPulling="2025-05-17 00:24:08.238161397 +0000 UTC m=+40.907133935" observedRunningTime="2025-05-17 00:24:08.546747118 +0000 UTC m=+41.215719651" watchObservedRunningTime="2025-05-17 00:24:08.547024684 +0000 UTC m=+41.215997216" May 17 00:24:08.551972 kubelet[3260]: I0517 00:24:08.551930 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69b4545c59-j694m" podStartSLOduration=25.487426829 podStartE2EDuration="28.551916658s" podCreationTimestamp="2025-05-17 00:23:40 +0000 UTC" firstStartedPulling="2025-05-17 00:24:04.75856551 +0000 UTC m=+37.427538042" lastFinishedPulling="2025-05-17 00:24:07.823055338 +0000 UTC m=+40.492027871" observedRunningTime="2025-05-17 00:24:08.551601366 +0000 UTC m=+41.220573900" watchObservedRunningTime="2025-05-17 00:24:08.551916658 +0000 UTC m=+41.220889188" May 17 00:24:09.550668 kubelet[3260]: I0517 00:24:09.550647 3260 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:24:09.562615 systemd-networkd[1566]: cali2f4fa8ff320: Gained IPv6LL May 17 00:24:09.709958 containerd[1952]: 
time="2025-05-17T00:24:09.709907767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:09.710186 containerd[1952]: time="2025-05-17T00:24:09.710110786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:24:09.710558 containerd[1952]: time="2025-05-17T00:24:09.710509641Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:09.711452 containerd[1952]: time="2025-05-17T00:24:09.711409787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:09.711890 containerd[1952]: time="2025-05-17T00:24:09.711848072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.473506638s" May 17 00:24:09.711890 containerd[1952]: time="2025-05-17T00:24:09.711864487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:24:09.712439 containerd[1952]: time="2025-05-17T00:24:09.712428423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:24:09.712930 containerd[1952]: time="2025-05-17T00:24:09.712916084Z" level=info msg="CreateContainer within sandbox \"0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:24:09.718345 containerd[1952]: time="2025-05-17T00:24:09.718303408Z" level=info msg="CreateContainer within sandbox \"0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"144ddac0f0eab7fb3e709a6e1f424603d78d956eb4f2a11ac178ed9b1c23962d\"" May 17 00:24:09.718581 containerd[1952]: time="2025-05-17T00:24:09.718543098Z" level=info msg="StartContainer for \"144ddac0f0eab7fb3e709a6e1f424603d78d956eb4f2a11ac178ed9b1c23962d\"" May 17 00:24:09.748055 containerd[1952]: time="2025-05-17T00:24:09.748004644Z" level=info msg="StartContainer for \"144ddac0f0eab7fb3e709a6e1f424603d78d956eb4f2a11ac178ed9b1c23962d\" returns successfully" May 17 00:24:10.022869 containerd[1952]: time="2025-05-17T00:24:10.022718233Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:10.023815 containerd[1952]: time="2025-05-17T00:24:10.023738248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to 
May 17 00:24:10.023815 containerd[1952]: time="2025-05-17T00:24:10.023738248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:24:10.023853 containerd[1952]: time="2025-05-17T00:24:10.023814186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 17 00:24:10.023955 kubelet[3260]: E0517 00:24:10.023904 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:24:10.023955 kubelet[3260]: E0517 00:24:10.023939 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
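Both containerd and the kubelet report the same root cause here: before fetching the manifest, containerd's resolver requests an anonymous bearer token from the registry, and it is that token endpoint on ghcr.io that returns 403 Forbidden, so the pull never reaches the image content at all. A standalone diagnostic sketch (an assumption: a bare net/http GET outside containerd, against the exact URL from the log) that replays the request:

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io"
    	resp, err := http.Get(url) // same token endpoint as in the records above
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	fmt.Println(resp.Status) // the log shows this returning "403 Forbidden"; a healthy registry answers 200 with a JSON token
    }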
May 17 00:24:10.024360 kubelet[3260]: E0517 00:24:10.024222 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9h4nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-jr8s4_calico-system(307fc931-df0d-46ad-a0f5-9c642c960ef0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:24:10.024453 containerd[1952]: time="2025-05-17T00:24:10.024284495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\""
May 17 00:24:10.025484 kubelet[3260]: E0517 00:24:10.025469 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0"
May 17 00:24:10.146174 kubelet[3260]: I0517 00:24:10.146155 3260 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:24:10.456514 kubelet[3260]: I0517 00:24:10.456461 3260 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 17 00:24:10.456514 kubelet[3260]: I0517 00:24:10.456482 3260 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 17 00:24:10.483569 kernel: bpftool[7024]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
May 17 00:24:10.552995 kubelet[3260]: E0517 00:24:10.552974 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0"
May 17 00:24:10.564796 kubelet[3260]: I0517 00:24:10.564757 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hkm4q" podStartSLOduration=22.425375588 podStartE2EDuration="27.564743239s" podCreationTimestamp="2025-05-17 00:23:43 +0000 UTC" firstStartedPulling="2025-05-17 00:24:04.572968032 +0000 UTC m=+37.241940573"
lastFinishedPulling="2025-05-17 00:24:09.712335691 +0000 UTC m=+42.381308224" observedRunningTime="2025-05-17 00:24:10.564319323 +0000 UTC m=+43.233291856" watchObservedRunningTime="2025-05-17 00:24:10.564743239 +0000 UTC m=+43.233715775" May 17 00:24:10.642345 systemd-networkd[1566]: vxlan.calico: Link UP May 17 00:24:10.642352 systemd-networkd[1566]: vxlan.calico: Gained carrier May 17 00:24:11.802579 systemd-networkd[1566]: vxlan.calico: Gained IPv6LL May 17 00:24:11.916875 containerd[1952]: time="2025-05-17T00:24:11.916822905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:11.917098 containerd[1952]: time="2025-05-17T00:24:11.917048669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:24:11.917432 containerd[1952]: time="2025-05-17T00:24:11.917392451Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:11.918377 containerd[1952]: time="2025-05-17T00:24:11.918336109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:11.918795 containerd[1952]: time="2025-05-17T00:24:11.918752530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 1.894422308s" May 17 00:24:11.918795 containerd[1952]: time="2025-05-17T00:24:11.918769200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:24:11.922040 containerd[1952]: time="2025-05-17T00:24:11.921974892Z" level=info msg="CreateContainer within sandbox \"c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:24:11.925714 containerd[1952]: time="2025-05-17T00:24:11.925672489Z" level=info msg="CreateContainer within sandbox \"c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b1a778442e1613f6b7857bbca7b593db0775da7f08c3c8fa55852def71c3561f\"" May 17 00:24:11.925940 containerd[1952]: time="2025-05-17T00:24:11.925927708Z" level=info msg="StartContainer for \"b1a778442e1613f6b7857bbca7b593db0775da7f08c3c8fa55852def71c3561f\"" May 17 00:24:11.974626 containerd[1952]: time="2025-05-17T00:24:11.974604904Z" level=info msg="StartContainer for \"b1a778442e1613f6b7857bbca7b593db0775da7f08c3c8fa55852def71c3561f\" returns successfully" May 17 00:24:12.584044 kubelet[3260]: I0517 00:24:12.583886 3260 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5cb9df59f4-w67vm" podStartSLOduration=25.291534798 podStartE2EDuration="29.583840525s" podCreationTimestamp="2025-05-17 00:23:43 +0000 UTC" firstStartedPulling="2025-05-17 00:24:07.626841048 
+0000 UTC m=+40.295813584" lastFinishedPulling="2025-05-17 00:24:11.919146778 +0000 UTC m=+44.588119311" observedRunningTime="2025-05-17 00:24:12.582818522 +0000 UTC m=+45.251791148" watchObservedRunningTime="2025-05-17 00:24:12.583840525 +0000 UTC m=+45.252813106" May 17 00:24:13.407191 containerd[1952]: time="2025-05-17T00:24:13.407102878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:24:13.724933 containerd[1952]: time="2025-05-17T00:24:13.724832373Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:13.725682 containerd[1952]: time="2025-05-17T00:24:13.725644001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:13.725770 containerd[1952]: time="2025-05-17T00:24:13.725711126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:24:13.725930 kubelet[3260]: E0517 00:24:13.725850 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:13.725930 kubelet[3260]: E0517 00:24:13.725897 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:13.726381 kubelet[3260]: E0517 00:24:13.726008 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:89e605ba6a4a488d920453165d66bb98,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5sts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfcb87d55-nzxqv_calico-system(d4055bd2-75b3-4d87-b331-17976496ef74): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:13.727967 containerd[1952]: time="2025-05-17T00:24:13.727934652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:24:14.046456 containerd[1952]: time="2025-05-17T00:24:14.046182251Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:14.047177 containerd[1952]: time="2025-05-17T00:24:14.047066284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:14.047177 containerd[1952]: time="2025-05-17T00:24:14.047137502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:24:14.047291 kubelet[3260]: E0517 00:24:14.047237 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:14.047291 kubelet[3260]: E0517 00:24:14.047270 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:14.047382 kubelet[3260]: E0517 00:24:14.047334 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5sts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfcb87d55-nzxqv_calico-system(d4055bd2-75b3-4d87-b331-17976496ef74): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:14.048555 kubelet[3260]: E0517 00:24:14.048490 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:24:22.406866 containerd[1952]: time="2025-05-17T00:24:22.406655430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:24:22.715313 containerd[1952]: time="2025-05-17T00:24:22.715189675Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:22.715951 containerd[1952]: time="2025-05-17T00:24:22.715909938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:22.716020 containerd[1952]: time="2025-05-17T00:24:22.715942497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:24:22.716095 kubelet[3260]: E0517 00:24:22.716043 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:22.716095 kubelet[3260]: E0517 00:24:22.716074 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:22.716324 kubelet[3260]: E0517 00:24:22.716140 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9h4nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-jr8s4_calico-system(307fc931-df0d-46ad-a0f5-9c642c960ef0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:22.717436 kubelet[3260]: E0517 00:24:22.717393 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0"
May 17 00:24:27.398677 containerd[1952]: time="2025-05-17T00:24:27.398581477Z" level=info msg="StopPodSandbox for \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\""
May 17 00:24:27.491043 containerd[1952]: 2025-05-17 00:24:27.457 [WARNING][7344] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0", GenerateName:"calico-kube-controllers-5cb9df59f4-", Namespace:"calico-system", SelfLink:"", UID:"7c71ecf2-a673-4248-97b2-14f7ed1d6b2c", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cb9df59f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71", Pod:"calico-kube-controllers-5cb9df59f4-w67vm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f4fa8ff320", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:24:27.491043 containerd[1952]: 2025-05-17 00:24:27.458 [INFO][7344] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35"
May 17 00:24:27.491043 containerd[1952]: 2025-05-17 00:24:27.458 [INFO][7344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" iface="eth0" netns=""
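The sandbox being stopped here, 6e1a011cefd5..., is the pod's earlier sandbox: the RunPodSandbox above was Attempt:1, and the live WorkloadEndpoint now records ContainerID c91d947efdf3.... Because the CNI DEL carries the stale sandbox ID, Calico keeps the WEP, and the IPAM release that follows is a no-op. A conceptual Go sketch of that guard (hypothetical code, not Calico's actual k8s.go implementation):

    package main

    import "fmt"

    func main() {
    	cniDel := "6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35"   // stale sandbox named by the CNI DEL
    	wepOwner := "c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71" // live sandbox recorded on the WEP
    	if cniDel != wepOwner {
    		// deleting the WEP here would tear down the running pod's endpoint
    		fmt.Println("CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.")
    	}
    }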
ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" iface="eth0" netns="" May 17 00:24:27.491043 containerd[1952]: 2025-05-17 00:24:27.465 [INFO][7344] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" May 17 00:24:27.491043 containerd[1952]: 2025-05-17 00:24:27.465 [INFO][7344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" May 17 00:24:27.491043 containerd[1952]: 2025-05-17 00:24:27.481 [INFO][7362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" HandleID="k8s-pod-network.6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:27.491043 containerd[1952]: 2025-05-17 00:24:27.481 [INFO][7362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:27.491043 containerd[1952]: 2025-05-17 00:24:27.481 [INFO][7362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:27.491043 containerd[1952]: 2025-05-17 00:24:27.487 [WARNING][7362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" HandleID="k8s-pod-network.6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:27.491043 containerd[1952]: 2025-05-17 00:24:27.487 [INFO][7362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" HandleID="k8s-pod-network.6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:27.491043 containerd[1952]: 2025-05-17 00:24:27.488 [INFO][7362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:27.491043 containerd[1952]: 2025-05-17 00:24:27.489 [INFO][7344] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" May 17 00:24:27.491596 containerd[1952]: time="2025-05-17T00:24:27.491079511Z" level=info msg="TearDown network for sandbox \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\" successfully" May 17 00:24:27.491596 containerd[1952]: time="2025-05-17T00:24:27.491106408Z" level=info msg="StopPodSandbox for \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\" returns successfully" May 17 00:24:27.491596 containerd[1952]: time="2025-05-17T00:24:27.491563893Z" level=info msg="RemovePodSandbox for \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\"" May 17 00:24:27.491596 containerd[1952]: time="2025-05-17T00:24:27.491588358Z" level=info msg="Forcibly stopping sandbox \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\"" May 17 00:24:27.541775 containerd[1952]: 2025-05-17 00:24:27.517 [WARNING][7389] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0", GenerateName:"calico-kube-controllers-5cb9df59f4-", Namespace:"calico-system", SelfLink:"", UID:"7c71ecf2-a673-4248-97b2-14f7ed1d6b2c", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cb9df59f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"c91d947efdf3366239c1f4ce936a88a408e261fb0b3db5fd726046d0446a7d71", Pod:"calico-kube-controllers-5cb9df59f4-w67vm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f4fa8ff320", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:27.541775 containerd[1952]: 2025-05-17 00:24:27.517 [INFO][7389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" May 17 00:24:27.541775 containerd[1952]: 2025-05-17 00:24:27.517 [INFO][7389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" iface="eth0" netns="" May 17 00:24:27.541775 containerd[1952]: 2025-05-17 00:24:27.517 [INFO][7389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" May 17 00:24:27.541775 containerd[1952]: 2025-05-17 00:24:27.517 [INFO][7389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" May 17 00:24:27.541775 containerd[1952]: 2025-05-17 00:24:27.532 [INFO][7405] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" HandleID="k8s-pod-network.6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:27.541775 containerd[1952]: 2025-05-17 00:24:27.532 [INFO][7405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:27.541775 containerd[1952]: 2025-05-17 00:24:27.532 [INFO][7405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:27.541775 containerd[1952]: 2025-05-17 00:24:27.538 [WARNING][7405] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" HandleID="k8s-pod-network.6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:27.541775 containerd[1952]: 2025-05-17 00:24:27.538 [INFO][7405] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" HandleID="k8s-pod-network.6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--kube--controllers--5cb9df59f4--w67vm-eth0" May 17 00:24:27.541775 containerd[1952]: 2025-05-17 00:24:27.539 [INFO][7405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:27.541775 containerd[1952]: 2025-05-17 00:24:27.540 [INFO][7389] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35" May 17 00:24:27.542243 containerd[1952]: time="2025-05-17T00:24:27.541807414Z" level=info msg="TearDown network for sandbox \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\" successfully" May 17 00:24:27.543968 containerd[1952]: time="2025-05-17T00:24:27.543954211Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:27.544008 containerd[1952]: time="2025-05-17T00:24:27.543984525Z" level=info msg="RemovePodSandbox \"6e1a011cefd56c248480afe2f10bbff0802bba2160e96c162518302e3a1b8b35\" returns successfully" May 17 00:24:27.544291 containerd[1952]: time="2025-05-17T00:24:27.544279379Z" level=info msg="StopPodSandbox for \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\"" May 17 00:24:27.577794 containerd[1952]: 2025-05-17 00:24:27.561 [WARNING][7430] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"403f708a-0226-4cfa-98fa-55326c364f55", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6", Pod:"csi-node-driver-hkm4q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a95f1c1ffc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:27.577794 containerd[1952]: 2025-05-17 00:24:27.561 [INFO][7430] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:24:27.577794 containerd[1952]: 2025-05-17 00:24:27.561 [INFO][7430] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" iface="eth0" netns="" May 17 00:24:27.577794 containerd[1952]: 2025-05-17 00:24:27.561 [INFO][7430] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:24:27.577794 containerd[1952]: 2025-05-17 00:24:27.561 [INFO][7430] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:24:27.577794 containerd[1952]: 2025-05-17 00:24:27.571 [INFO][7447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" HandleID="k8s-pod-network.0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" Workload="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:27.577794 containerd[1952]: 2025-05-17 00:24:27.571 [INFO][7447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:27.577794 containerd[1952]: 2025-05-17 00:24:27.571 [INFO][7447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:27.577794 containerd[1952]: 2025-05-17 00:24:27.575 [WARNING][7447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" HandleID="k8s-pod-network.0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" Workload="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:27.577794 containerd[1952]: 2025-05-17 00:24:27.575 [INFO][7447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" HandleID="k8s-pod-network.0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" Workload="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:27.577794 containerd[1952]: 2025-05-17 00:24:27.576 [INFO][7447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:27.577794 containerd[1952]: 2025-05-17 00:24:27.577 [INFO][7430] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:24:27.577794 containerd[1952]: time="2025-05-17T00:24:27.577775850Z" level=info msg="TearDown network for sandbox \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\" successfully" May 17 00:24:27.577794 containerd[1952]: time="2025-05-17T00:24:27.577790926Z" level=info msg="StopPodSandbox for \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\" returns successfully" May 17 00:24:27.578134 containerd[1952]: time="2025-05-17T00:24:27.578001173Z" level=info msg="RemovePodSandbox for \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\"" May 17 00:24:27.578134 containerd[1952]: time="2025-05-17T00:24:27.578021225Z" level=info msg="Forcibly stopping sandbox \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\"" May 17 00:24:27.611157 containerd[1952]: 2025-05-17 00:24:27.595 [WARNING][7475] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"403f708a-0226-4cfa-98fa-55326c364f55", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"0438b3068f0e8e2a9c731a0c39118a21404578f395c9dd424bd6c367b1d06de6", Pod:"csi-node-driver-hkm4q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a95f1c1ffc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:27.611157 containerd[1952]: 2025-05-17 00:24:27.595 [INFO][7475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:24:27.611157 containerd[1952]: 2025-05-17 00:24:27.595 [INFO][7475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" iface="eth0" netns="" May 17 00:24:27.611157 containerd[1952]: 2025-05-17 00:24:27.595 [INFO][7475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:24:27.611157 containerd[1952]: 2025-05-17 00:24:27.595 [INFO][7475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:24:27.611157 containerd[1952]: 2025-05-17 00:24:27.605 [INFO][7492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" HandleID="k8s-pod-network.0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" Workload="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:27.611157 containerd[1952]: 2025-05-17 00:24:27.605 [INFO][7492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:27.611157 containerd[1952]: 2025-05-17 00:24:27.605 [INFO][7492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:27.611157 containerd[1952]: 2025-05-17 00:24:27.608 [WARNING][7492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" HandleID="k8s-pod-network.0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" Workload="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:27.611157 containerd[1952]: 2025-05-17 00:24:27.608 [INFO][7492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" HandleID="k8s-pod-network.0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" Workload="ci--4081.3.3--n--750554c5a6-k8s-csi--node--driver--hkm4q-eth0" May 17 00:24:27.611157 containerd[1952]: 2025-05-17 00:24:27.609 [INFO][7492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:27.611157 containerd[1952]: 2025-05-17 00:24:27.610 [INFO][7475] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed" May 17 00:24:27.611157 containerd[1952]: time="2025-05-17T00:24:27.611146784Z" level=info msg="TearDown network for sandbox \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\" successfully" May 17 00:24:27.612524 containerd[1952]: time="2025-05-17T00:24:27.612481746Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:27.612524 containerd[1952]: time="2025-05-17T00:24:27.612514620Z" level=info msg="RemovePodSandbox \"0de92b2626a3850078c9fe597059004a60e5c65d91c1e223c60be696558238ed\" returns successfully" May 17 00:24:27.612778 containerd[1952]: time="2025-05-17T00:24:27.612746742Z" level=info msg="StopPodSandbox for \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\"" May 17 00:24:27.647261 containerd[1952]: 2025-05-17 00:24:27.629 [WARNING][7520] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0", GenerateName:"calico-apiserver-69b4545c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b4545c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8", Pod:"calico-apiserver-69b4545c59-j694m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fe2dba4a54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:27.647261 containerd[1952]: 2025-05-17 00:24:27.630 [INFO][7520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:24:27.647261 containerd[1952]: 2025-05-17 00:24:27.630 [INFO][7520] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" iface="eth0" netns="" May 17 00:24:27.647261 containerd[1952]: 2025-05-17 00:24:27.630 [INFO][7520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:24:27.647261 containerd[1952]: 2025-05-17 00:24:27.630 [INFO][7520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:24:27.647261 containerd[1952]: 2025-05-17 00:24:27.640 [INFO][7538] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" HandleID="k8s-pod-network.095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:27.647261 containerd[1952]: 2025-05-17 00:24:27.640 [INFO][7538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:27.647261 containerd[1952]: 2025-05-17 00:24:27.640 [INFO][7538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:27.647261 containerd[1952]: 2025-05-17 00:24:27.644 [WARNING][7538] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" HandleID="k8s-pod-network.095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:27.647261 containerd[1952]: 2025-05-17 00:24:27.644 [INFO][7538] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" HandleID="k8s-pod-network.095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:27.647261 containerd[1952]: 2025-05-17 00:24:27.645 [INFO][7538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:27.647261 containerd[1952]: 2025-05-17 00:24:27.646 [INFO][7520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:24:27.647563 containerd[1952]: time="2025-05-17T00:24:27.647269768Z" level=info msg="TearDown network for sandbox \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\" successfully" May 17 00:24:27.647563 containerd[1952]: time="2025-05-17T00:24:27.647288865Z" level=info msg="StopPodSandbox for \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\" returns successfully" May 17 00:24:27.647563 containerd[1952]: time="2025-05-17T00:24:27.647527334Z" level=info msg="RemovePodSandbox for \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\"" May 17 00:24:27.647563 containerd[1952]: time="2025-05-17T00:24:27.647542306Z" level=info msg="Forcibly stopping sandbox \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\"" May 17 00:24:27.683418 containerd[1952]: 2025-05-17 00:24:27.665 [WARNING][7565] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0", GenerateName:"calico-apiserver-69b4545c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bc1c7fb-90f8-4764-96f9-e3c9a5e522e6", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b4545c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"0f416d5d36f5495f52c78930f8d0effd062bbffe0fb3ecd18dc2cfc1c94c87c8", Pod:"calico-apiserver-69b4545c59-j694m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fe2dba4a54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:27.683418 containerd[1952]: 2025-05-17 00:24:27.666 [INFO][7565] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:24:27.683418 containerd[1952]: 2025-05-17 00:24:27.666 [INFO][7565] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" iface="eth0" netns="" May 17 00:24:27.683418 containerd[1952]: 2025-05-17 00:24:27.666 [INFO][7565] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:24:27.683418 containerd[1952]: 2025-05-17 00:24:27.666 [INFO][7565] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:24:27.683418 containerd[1952]: 2025-05-17 00:24:27.676 [INFO][7580] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" HandleID="k8s-pod-network.095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:27.683418 containerd[1952]: 2025-05-17 00:24:27.676 [INFO][7580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:27.683418 containerd[1952]: 2025-05-17 00:24:27.676 [INFO][7580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:27.683418 containerd[1952]: 2025-05-17 00:24:27.680 [WARNING][7580] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" HandleID="k8s-pod-network.095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:27.683418 containerd[1952]: 2025-05-17 00:24:27.680 [INFO][7580] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" HandleID="k8s-pod-network.095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--j694m-eth0" May 17 00:24:27.683418 containerd[1952]: 2025-05-17 00:24:27.681 [INFO][7580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:27.683418 containerd[1952]: 2025-05-17 00:24:27.682 [INFO][7565] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae" May 17 00:24:27.683418 containerd[1952]: time="2025-05-17T00:24:27.683406835Z" level=info msg="TearDown network for sandbox \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\" successfully" May 17 00:24:27.775839 containerd[1952]: time="2025-05-17T00:24:27.775774282Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:27.775839 containerd[1952]: time="2025-05-17T00:24:27.775834034Z" level=info msg="RemovePodSandbox \"095d64100c62547ba73671dcb83baff0db35543f5330152c6d7f9a6e4ce0cfae\" returns successfully" May 17 00:24:27.776297 containerd[1952]: time="2025-05-17T00:24:27.776242151Z" level=info msg="StopPodSandbox for \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\"" May 17 00:24:27.819414 containerd[1952]: 2025-05-17 00:24:27.801 [WARNING][7604] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0", GenerateName:"calico-apiserver-69b4545c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d8a08e2-6fb2-41c5-aa91-98552c67cdeb", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b4545c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c", Pod:"calico-apiserver-69b4545c59-2hqxd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0391eb158c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:27.819414 containerd[1952]: 2025-05-17 00:24:27.801 [INFO][7604] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:24:27.819414 containerd[1952]: 2025-05-17 00:24:27.801 [INFO][7604] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" iface="eth0" netns="" May 17 00:24:27.819414 containerd[1952]: 2025-05-17 00:24:27.801 [INFO][7604] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:24:27.819414 containerd[1952]: 2025-05-17 00:24:27.801 [INFO][7604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:24:27.819414 containerd[1952]: 2025-05-17 00:24:27.812 [INFO][7620] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" HandleID="k8s-pod-network.ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:27.819414 containerd[1952]: 2025-05-17 00:24:27.812 [INFO][7620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:27.819414 containerd[1952]: 2025-05-17 00:24:27.812 [INFO][7620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:27.819414 containerd[1952]: 2025-05-17 00:24:27.816 [WARNING][7620] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" HandleID="k8s-pod-network.ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:27.819414 containerd[1952]: 2025-05-17 00:24:27.816 [INFO][7620] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" HandleID="k8s-pod-network.ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:27.819414 containerd[1952]: 2025-05-17 00:24:27.818 [INFO][7620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:27.819414 containerd[1952]: 2025-05-17 00:24:27.818 [INFO][7604] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:24:27.819735 containerd[1952]: time="2025-05-17T00:24:27.819430587Z" level=info msg="TearDown network for sandbox \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\" successfully" May 17 00:24:27.819735 containerd[1952]: time="2025-05-17T00:24:27.819444456Z" level=info msg="StopPodSandbox for \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\" returns successfully" May 17 00:24:27.819735 containerd[1952]: time="2025-05-17T00:24:27.819692805Z" level=info msg="RemovePodSandbox for \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\"" May 17 00:24:27.819735 containerd[1952]: time="2025-05-17T00:24:27.819709949Z" level=info msg="Forcibly stopping sandbox \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\"" May 17 00:24:27.851704 containerd[1952]: 2025-05-17 00:24:27.835 [WARNING][7642] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0", GenerateName:"calico-apiserver-69b4545c59-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d8a08e2-6fb2-41c5-aa91-98552c67cdeb", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b4545c59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"5cf51296f56480ac31dfedda4504bf4c26a41b7324da11f5778c7abf1b25043c", Pod:"calico-apiserver-69b4545c59-2hqxd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0391eb158c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:27.851704 containerd[1952]: 2025-05-17 00:24:27.835 [INFO][7642] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:24:27.851704 containerd[1952]: 2025-05-17 00:24:27.835 [INFO][7642] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" iface="eth0" netns="" May 17 00:24:27.851704 containerd[1952]: 2025-05-17 00:24:27.835 [INFO][7642] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:24:27.851704 containerd[1952]: 2025-05-17 00:24:27.835 [INFO][7642] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:24:27.851704 containerd[1952]: 2025-05-17 00:24:27.845 [INFO][7654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" HandleID="k8s-pod-network.ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:27.851704 containerd[1952]: 2025-05-17 00:24:27.845 [INFO][7654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:27.851704 containerd[1952]: 2025-05-17 00:24:27.845 [INFO][7654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:27.851704 containerd[1952]: 2025-05-17 00:24:27.849 [WARNING][7654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" HandleID="k8s-pod-network.ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:27.851704 containerd[1952]: 2025-05-17 00:24:27.849 [INFO][7654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" HandleID="k8s-pod-network.ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" Workload="ci--4081.3.3--n--750554c5a6-k8s-calico--apiserver--69b4545c59--2hqxd-eth0" May 17 00:24:27.851704 containerd[1952]: 2025-05-17 00:24:27.850 [INFO][7654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:27.851704 containerd[1952]: 2025-05-17 00:24:27.851 [INFO][7642] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651" May 17 00:24:27.852015 containerd[1952]: time="2025-05-17T00:24:27.851728751Z" level=info msg="TearDown network for sandbox \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\" successfully" May 17 00:24:27.913905 containerd[1952]: time="2025-05-17T00:24:27.913874564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:27.914014 containerd[1952]: time="2025-05-17T00:24:27.913928657Z" level=info msg="RemovePodSandbox \"ae46213ee42a8183e447c26672cc1f67d0738f5b8b9c7ddaf0a905a9192b8651\" returns successfully" May 17 00:24:27.914264 containerd[1952]: time="2025-05-17T00:24:27.914250662Z" level=info msg="StopPodSandbox for \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\"" May 17 00:24:27.962213 containerd[1952]: 2025-05-17 00:24:27.934 [WARNING][7680] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"307fc931-df0d-46ad-a0f5-9c642c960ef0", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21", Pod:"goldmane-8f77d7b6c-jr8s4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali60e3c7769ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:27.962213 containerd[1952]: 2025-05-17 00:24:27.935 [INFO][7680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:24:27.962213 containerd[1952]: 2025-05-17 00:24:27.935 [INFO][7680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" iface="eth0" netns="" May 17 00:24:27.962213 containerd[1952]: 2025-05-17 00:24:27.935 [INFO][7680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:24:27.962213 containerd[1952]: 2025-05-17 00:24:27.935 [INFO][7680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:24:27.962213 containerd[1952]: 2025-05-17 00:24:27.952 [INFO][7703] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" HandleID="k8s-pod-network.35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" Workload="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:27.962213 containerd[1952]: 2025-05-17 00:24:27.952 [INFO][7703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:27.962213 containerd[1952]: 2025-05-17 00:24:27.953 [INFO][7703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:27.962213 containerd[1952]: 2025-05-17 00:24:27.958 [WARNING][7703] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" HandleID="k8s-pod-network.35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" Workload="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:27.962213 containerd[1952]: 2025-05-17 00:24:27.958 [INFO][7703] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" HandleID="k8s-pod-network.35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" Workload="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:27.962213 containerd[1952]: 2025-05-17 00:24:27.960 [INFO][7703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:27.962213 containerd[1952]: 2025-05-17 00:24:27.961 [INFO][7680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:24:27.962685 containerd[1952]: time="2025-05-17T00:24:27.962251835Z" level=info msg="TearDown network for sandbox \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\" successfully" May 17 00:24:27.962685 containerd[1952]: time="2025-05-17T00:24:27.962273859Z" level=info msg="StopPodSandbox for \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\" returns successfully" May 17 00:24:27.962685 containerd[1952]: time="2025-05-17T00:24:27.962628095Z" level=info msg="RemovePodSandbox for \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\"" May 17 00:24:27.962685 containerd[1952]: time="2025-05-17T00:24:27.962652145Z" level=info msg="Forcibly stopping sandbox \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\"" May 17 00:24:28.006641 containerd[1952]: 2025-05-17 00:24:27.986 [WARNING][7740] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"307fc931-df0d-46ad-a0f5-9c642c960ef0", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"ec08f7541e7620580733f1a221a78caa35d8705f6ed984b978310eadfd694f21", Pod:"goldmane-8f77d7b6c-jr8s4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali60e3c7769ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:28.006641 containerd[1952]: 2025-05-17 00:24:27.987 [INFO][7740] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:24:28.006641 containerd[1952]: 2025-05-17 00:24:27.987 [INFO][7740] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" iface="eth0" netns="" May 17 00:24:28.006641 containerd[1952]: 2025-05-17 00:24:27.987 [INFO][7740] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:24:28.006641 containerd[1952]: 2025-05-17 00:24:27.987 [INFO][7740] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:24:28.006641 containerd[1952]: 2025-05-17 00:24:27.998 [INFO][7762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" HandleID="k8s-pod-network.35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" Workload="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:28.006641 containerd[1952]: 2025-05-17 00:24:27.998 [INFO][7762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:28.006641 containerd[1952]: 2025-05-17 00:24:27.998 [INFO][7762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:28.006641 containerd[1952]: 2025-05-17 00:24:28.003 [WARNING][7762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" HandleID="k8s-pod-network.35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" Workload="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:28.006641 containerd[1952]: 2025-05-17 00:24:28.003 [INFO][7762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" HandleID="k8s-pod-network.35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" Workload="ci--4081.3.3--n--750554c5a6-k8s-goldmane--8f77d7b6c--jr8s4-eth0" May 17 00:24:28.006641 containerd[1952]: 2025-05-17 00:24:28.004 [INFO][7762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:28.006641 containerd[1952]: 2025-05-17 00:24:28.005 [INFO][7740] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85" May 17 00:24:28.006641 containerd[1952]: time="2025-05-17T00:24:28.006631419Z" level=info msg="TearDown network for sandbox \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\" successfully" May 17 00:24:28.046682 containerd[1952]: time="2025-05-17T00:24:28.046654663Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:28.046770 containerd[1952]: time="2025-05-17T00:24:28.046701742Z" level=info msg="RemovePodSandbox \"35e18ca4ba8dac42dea82d1af300ac7d41f8cf2eb97346f87fcd2c6793cb6d85\" returns successfully" May 17 00:24:28.047013 containerd[1952]: time="2025-05-17T00:24:28.046982212Z" level=info msg="StopPodSandbox for \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\"" May 17 00:24:28.082284 containerd[1952]: 2025-05-17 00:24:28.064 [WARNING][7787] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5181617e-229c-477a-82dd-f49d74685250", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60", Pod:"coredns-7c65d6cfc9-kmxkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02dec70a0e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:28.082284 containerd[1952]: 2025-05-17 00:24:28.064 [INFO][7787] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:24:28.082284 containerd[1952]: 2025-05-17 00:24:28.064 [INFO][7787] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" iface="eth0" netns="" May 17 00:24:28.082284 containerd[1952]: 2025-05-17 00:24:28.064 [INFO][7787] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:24:28.082284 containerd[1952]: 2025-05-17 00:24:28.064 [INFO][7787] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:24:28.082284 containerd[1952]: 2025-05-17 00:24:28.074 [INFO][7804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" HandleID="k8s-pod-network.fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:28.082284 containerd[1952]: 2025-05-17 00:24:28.074 [INFO][7804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:28.082284 containerd[1952]: 2025-05-17 00:24:28.074 [INFO][7804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:28.082284 containerd[1952]: 2025-05-17 00:24:28.079 [WARNING][7804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" HandleID="k8s-pod-network.fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:28.082284 containerd[1952]: 2025-05-17 00:24:28.079 [INFO][7804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" HandleID="k8s-pod-network.fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:28.082284 containerd[1952]: 2025-05-17 00:24:28.080 [INFO][7804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:28.082284 containerd[1952]: 2025-05-17 00:24:28.081 [INFO][7787] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:24:28.082620 containerd[1952]: time="2025-05-17T00:24:28.082290949Z" level=info msg="TearDown network for sandbox \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\" successfully" May 17 00:24:28.082620 containerd[1952]: time="2025-05-17T00:24:28.082310781Z" level=info msg="StopPodSandbox for \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\" returns successfully" May 17 00:24:28.082620 containerd[1952]: time="2025-05-17T00:24:28.082571405Z" level=info msg="RemovePodSandbox for \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\"" May 17 00:24:28.082620 containerd[1952]: time="2025-05-17T00:24:28.082589451Z" level=info msg="Forcibly stopping sandbox \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\"" May 17 00:24:28.116354 containerd[1952]: 2025-05-17 00:24:28.100 [WARNING][7828] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5181617e-229c-477a-82dd-f49d74685250", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"6136615dc31c08bb5e80394fe8bb02f784ea770b6ab343da11d54ac1e5ed2e60", Pod:"coredns-7c65d6cfc9-kmxkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02dec70a0e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:28.116354 containerd[1952]: 2025-05-17 00:24:28.100 [INFO][7828] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:24:28.116354 containerd[1952]: 2025-05-17 00:24:28.100 [INFO][7828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" iface="eth0" netns="" May 17 00:24:28.116354 containerd[1952]: 2025-05-17 00:24:28.100 [INFO][7828] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:24:28.116354 containerd[1952]: 2025-05-17 00:24:28.100 [INFO][7828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:24:28.116354 containerd[1952]: 2025-05-17 00:24:28.109 [INFO][7842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" HandleID="k8s-pod-network.fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:28.116354 containerd[1952]: 2025-05-17 00:24:28.110 [INFO][7842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:28.116354 containerd[1952]: 2025-05-17 00:24:28.110 [INFO][7842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:28.116354 containerd[1952]: 2025-05-17 00:24:28.113 [WARNING][7842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" HandleID="k8s-pod-network.fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:28.116354 containerd[1952]: 2025-05-17 00:24:28.113 [INFO][7842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" HandleID="k8s-pod-network.fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--kmxkn-eth0" May 17 00:24:28.116354 containerd[1952]: 2025-05-17 00:24:28.114 [INFO][7842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:28.116354 containerd[1952]: 2025-05-17 00:24:28.115 [INFO][7828] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174" May 17 00:24:28.116652 containerd[1952]: time="2025-05-17T00:24:28.116359722Z" level=info msg="TearDown network for sandbox \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\" successfully" May 17 00:24:28.140069 containerd[1952]: time="2025-05-17T00:24:28.140018686Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:28.140069 containerd[1952]: time="2025-05-17T00:24:28.140060339Z" level=info msg="RemovePodSandbox \"fac46b0a034bf86dd29803710b505376c21e6ef010670b8e3738aa0313b41174\" returns successfully" May 17 00:24:28.140318 containerd[1952]: time="2025-05-17T00:24:28.140306001Z" level=info msg="StopPodSandbox for \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\"" May 17 00:24:28.172898 containerd[1952]: 2025-05-17 00:24:28.156 [WARNING][7869] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-whisker--54b6765ddc--b72d9-eth0" May 17 00:24:28.172898 containerd[1952]: 2025-05-17 00:24:28.156 [INFO][7869] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:24:28.172898 containerd[1952]: 2025-05-17 00:24:28.156 [INFO][7869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" iface="eth0" netns="" May 17 00:24:28.172898 containerd[1952]: 2025-05-17 00:24:28.156 [INFO][7869] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:24:28.172898 containerd[1952]: 2025-05-17 00:24:28.156 [INFO][7869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:24:28.172898 containerd[1952]: 2025-05-17 00:24:28.166 [INFO][7887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" HandleID="k8s-pod-network.215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" Workload="ci--4081.3.3--n--750554c5a6-k8s-whisker--54b6765ddc--b72d9-eth0" May 17 00:24:28.172898 containerd[1952]: 2025-05-17 00:24:28.166 [INFO][7887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:28.172898 containerd[1952]: 2025-05-17 00:24:28.166 [INFO][7887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:28.172898 containerd[1952]: 2025-05-17 00:24:28.170 [WARNING][7887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" HandleID="k8s-pod-network.215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" Workload="ci--4081.3.3--n--750554c5a6-k8s-whisker--54b6765ddc--b72d9-eth0" May 17 00:24:28.172898 containerd[1952]: 2025-05-17 00:24:28.170 [INFO][7887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" HandleID="k8s-pod-network.215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" Workload="ci--4081.3.3--n--750554c5a6-k8s-whisker--54b6765ddc--b72d9-eth0" May 17 00:24:28.172898 containerd[1952]: 2025-05-17 00:24:28.171 [INFO][7887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:28.172898 containerd[1952]: 2025-05-17 00:24:28.172 [INFO][7869] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:24:28.173191 containerd[1952]: time="2025-05-17T00:24:28.172943074Z" level=info msg="TearDown network for sandbox \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\" successfully" May 17 00:24:28.173191 containerd[1952]: time="2025-05-17T00:24:28.172965197Z" level=info msg="StopPodSandbox for \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\" returns successfully" May 17 00:24:28.173237 containerd[1952]: time="2025-05-17T00:24:28.173220026Z" level=info msg="RemovePodSandbox for \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\"" May 17 00:24:28.173255 containerd[1952]: time="2025-05-17T00:24:28.173237076Z" level=info msg="Forcibly stopping sandbox \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\"" May 17 00:24:28.206976 containerd[1952]: 2025-05-17 00:24:28.190 [WARNING][7913] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" WorkloadEndpoint="ci--4081.3.3--n--750554c5a6-k8s-whisker--54b6765ddc--b72d9-eth0" May 17 00:24:28.206976 containerd[1952]: 2025-05-17 00:24:28.190 [INFO][7913] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:24:28.206976 containerd[1952]: 2025-05-17 00:24:28.190 [INFO][7913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" iface="eth0" netns="" May 17 00:24:28.206976 containerd[1952]: 2025-05-17 00:24:28.190 [INFO][7913] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:24:28.206976 containerd[1952]: 2025-05-17 00:24:28.190 [INFO][7913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:24:28.206976 containerd[1952]: 2025-05-17 00:24:28.201 [INFO][7928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" HandleID="k8s-pod-network.215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" Workload="ci--4081.3.3--n--750554c5a6-k8s-whisker--54b6765ddc--b72d9-eth0" May 17 00:24:28.206976 containerd[1952]: 2025-05-17 00:24:28.201 [INFO][7928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:28.206976 containerd[1952]: 2025-05-17 00:24:28.201 [INFO][7928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:28.206976 containerd[1952]: 2025-05-17 00:24:28.205 [WARNING][7928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" HandleID="k8s-pod-network.215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" Workload="ci--4081.3.3--n--750554c5a6-k8s-whisker--54b6765ddc--b72d9-eth0" May 17 00:24:28.206976 containerd[1952]: 2025-05-17 00:24:28.205 [INFO][7928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" HandleID="k8s-pod-network.215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" Workload="ci--4081.3.3--n--750554c5a6-k8s-whisker--54b6765ddc--b72d9-eth0" May 17 00:24:28.206976 containerd[1952]: 2025-05-17 00:24:28.205 [INFO][7928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:28.206976 containerd[1952]: 2025-05-17 00:24:28.206 [INFO][7913] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8" May 17 00:24:28.207233 containerd[1952]: time="2025-05-17T00:24:28.207004971Z" level=info msg="TearDown network for sandbox \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\" successfully" May 17 00:24:28.314288 containerd[1952]: time="2025-05-17T00:24:28.314226283Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:24:28.314288 containerd[1952]: time="2025-05-17T00:24:28.314268863Z" level=info msg="RemovePodSandbox \"215bd2e7e0d0a1076dc2f2cfe17fa5387b46924ed81832da9ca4746cb1e3dbb8\" returns successfully" May 17 00:24:28.314554 containerd[1952]: time="2025-05-17T00:24:28.314543471Z" level=info msg="StopPodSandbox for \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\"" May 17 00:24:28.347698 containerd[1952]: 2025-05-17 00:24:28.331 [WARNING][7953] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ed62d733-94ab-45e2-84ca-07ad82b2634c", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855", Pod:"coredns-7c65d6cfc9-mm8pw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali580bc0191ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:28.347698 containerd[1952]: 2025-05-17 00:24:28.332 [INFO][7953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:24:28.347698 containerd[1952]: 2025-05-17 00:24:28.332 [INFO][7953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" iface="eth0" netns="" May 17 00:24:28.347698 containerd[1952]: 2025-05-17 00:24:28.332 [INFO][7953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:24:28.347698 containerd[1952]: 2025-05-17 00:24:28.332 [INFO][7953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:24:28.347698 containerd[1952]: 2025-05-17 00:24:28.341 [INFO][7971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" HandleID="k8s-pod-network.55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:28.347698 containerd[1952]: 2025-05-17 00:24:28.341 [INFO][7971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:28.347698 containerd[1952]: 2025-05-17 00:24:28.341 [INFO][7971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:28.347698 containerd[1952]: 2025-05-17 00:24:28.345 [WARNING][7971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" HandleID="k8s-pod-network.55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:28.347698 containerd[1952]: 2025-05-17 00:24:28.345 [INFO][7971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" HandleID="k8s-pod-network.55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:28.347698 containerd[1952]: 2025-05-17 00:24:28.346 [INFO][7971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:28.347698 containerd[1952]: 2025-05-17 00:24:28.346 [INFO][7953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:24:28.347698 containerd[1952]: time="2025-05-17T00:24:28.347689161Z" level=info msg="TearDown network for sandbox \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\" successfully" May 17 00:24:28.348051 containerd[1952]: time="2025-05-17T00:24:28.347709322Z" level=info msg="StopPodSandbox for \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\" returns successfully" May 17 00:24:28.348051 containerd[1952]: time="2025-05-17T00:24:28.347961399Z" level=info msg="RemovePodSandbox for \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\"" May 17 00:24:28.348051 containerd[1952]: time="2025-05-17T00:24:28.347980803Z" level=info msg="Forcibly stopping sandbox \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\"" May 17 00:24:28.380627 containerd[1952]: 2025-05-17 00:24:28.364 [WARNING][7996] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ed62d733-94ab-45e2-84ca-07ad82b2634c", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-750554c5a6", ContainerID:"a616171e2cc431d442e3b10efffbeb2eb086b9c4ddf0985607b63f7d4a9a2855", Pod:"coredns-7c65d6cfc9-mm8pw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali580bc0191ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:28.380627 containerd[1952]: 2025-05-17 00:24:28.364 [INFO][7996] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:24:28.380627 containerd[1952]: 2025-05-17 00:24:28.364 [INFO][7996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" iface="eth0" netns="" May 17 00:24:28.380627 containerd[1952]: 2025-05-17 00:24:28.364 [INFO][7996] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:24:28.380627 containerd[1952]: 2025-05-17 00:24:28.364 [INFO][7996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:24:28.380627 containerd[1952]: 2025-05-17 00:24:28.374 [INFO][8013] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" HandleID="k8s-pod-network.55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:28.380627 containerd[1952]: 2025-05-17 00:24:28.374 [INFO][8013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:28.380627 containerd[1952]: 2025-05-17 00:24:28.374 [INFO][8013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:28.380627 containerd[1952]: 2025-05-17 00:24:28.378 [WARNING][8013] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" HandleID="k8s-pod-network.55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:28.380627 containerd[1952]: 2025-05-17 00:24:28.378 [INFO][8013] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" HandleID="k8s-pod-network.55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" Workload="ci--4081.3.3--n--750554c5a6-k8s-coredns--7c65d6cfc9--mm8pw-eth0" May 17 00:24:28.380627 containerd[1952]: 2025-05-17 00:24:28.379 [INFO][8013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:28.380627 containerd[1952]: 2025-05-17 00:24:28.379 [INFO][7996] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e" May 17 00:24:28.380927 containerd[1952]: time="2025-05-17T00:24:28.380653028Z" level=info msg="TearDown network for sandbox \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\" successfully" May 17 00:24:28.404338 kubelet[3260]: E0517 00:24:28.404320 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:24:28.586660 containerd[1952]: time="2025-05-17T00:24:28.586460958Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
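The two passes over sandbox 55bc3237… above are Calico's idempotent CNI DEL path: the WorkloadEndpoint for coredns-7c65d6cfc9-mm8pw is already owned by a newer sandbox (ContainerID a616171e…), so the plugin refuses to delete the WEP ("CNI_CONTAINERID does not match") and only attempts the IPAM release, first by handleID and then by workloadID. The "Asked to release address but it doesn't exist" warning is the expected result of releasing an address an earlier DEL already freed. One way to confirm the allocation state from the node, assuming calicoctl is installed there (an illustrative command, not taken from this log):

  calicoctl ipam show --ip=192.168.62.133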
May 17 00:24:28.586660 containerd[1952]: time="2025-05-17T00:24:28.586606957Z" level=info msg="RemovePodSandbox \"55bc3237f8a271b7eba511099f2d721b7ba785290912c5b82cc0c0c2a0656a4e\" returns successfully" May 17 00:24:33.406739 kubelet[3260]: E0517 00:24:33.406639 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:24:41.155836 kubelet[3260]: I0517 00:24:41.155720 3260 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:24:42.406650 containerd[1952]: time="2025-05-17T00:24:42.406557749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:24:42.704840 containerd[1952]: time="2025-05-17T00:24:42.704582685Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:42.705557 containerd[1952]: time="2025-05-17T00:24:42.705542613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:42.705619 containerd[1952]: time="2025-05-17T00:24:42.705585037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:24:42.705689 kubelet[3260]: E0517 00:24:42.705660 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:42.705959 kubelet[3260]: E0517 00:24:42.705696 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:24:42.705959 kubelet[3260]: E0517 00:24:42.705758 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:89e605ba6a4a488d920453165d66bb98,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5sts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfcb87d55-nzxqv_calico-system(d4055bd2-75b3-4d87-b331-17976496ef74): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:42.707223 containerd[1952]: time="2025-05-17T00:24:42.707173343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:24:43.024939 containerd[1952]: time="2025-05-17T00:24:43.024784222Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:43.025714 containerd[1952]: time="2025-05-17T00:24:43.025642946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:43.025746 containerd[1952]: time="2025-05-17T00:24:43.025725418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:24:43.025872 kubelet[3260]: E0517 00:24:43.025822 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:43.025872 kubelet[3260]: E0517 00:24:43.025852 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:24:43.025963 kubelet[3260]: E0517 00:24:43.025912 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5sts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfcb87d55-nzxqv_calico-system(d4055bd2-75b3-4d87-b331-17976496ef74): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:43.027202 kubelet[3260]: E0517 00:24:43.027148 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:24:45.404586 containerd[1952]: time="2025-05-17T00:24:45.404536679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:24:45.739686 containerd[1952]: time="2025-05-17T00:24:45.739635649Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:24:45.740059 containerd[1952]: time="2025-05-17T00:24:45.740041179Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:24:45.740107 containerd[1952]: time="2025-05-17T00:24:45.740090220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:24:45.740227 kubelet[3260]: E0517 00:24:45.740202 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:45.740575 kubelet[3260]: E0517 00:24:45.740236 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:24:45.740575 kubelet[3260]: E0517 00:24:45.740312 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9h4nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-jr8s4_calico-system(307fc931-df0d-46ad-a0f5-9c642c960ef0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:24:45.741526 kubelet[3260]: E0517 00:24:45.741509 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:24:56.406884 kubelet[3260]: E0517 00:24:56.406750 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:24:57.407771 kubelet[3260]: E0517 00:24:57.407645 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:25:09.406840 kubelet[3260]: E0517 00:25:09.406740 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:25:09.407948 kubelet[3260]: E0517 00:25:09.407810 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:25:23.407280 containerd[1952]: time="2025-05-17T00:25:23.407148525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:25:23.718226 containerd[1952]: time="2025-05-17T00:25:23.718096975Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:25:23.719175 containerd[1952]: time="2025-05-17T00:25:23.719101227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:25:23.719223 containerd[1952]: time="2025-05-17T00:25:23.719172804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:25:23.719363 kubelet[3260]: E0517 00:25:23.719298 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:25:23.719363 kubelet[3260]: E0517 00:25:23.719343 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:25:23.719641 kubelet[3260]: E0517 00:25:23.719408 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:89e605ba6a4a488d920453165d66bb98,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5sts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfcb87d55-nzxqv_calico-system(d4055bd2-75b3-4d87-b331-17976496ef74): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:25:23.721174 containerd[1952]: time="2025-05-17T00:25:23.721133821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:25:24.037842 containerd[1952]: time="2025-05-17T00:25:24.037544223Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:25:24.038548 containerd[1952]: time="2025-05-17T00:25:24.038437619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:25:24.038610 containerd[1952]: time="2025-05-17T00:25:24.038583859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:25:24.038720 kubelet[3260]: E0517 00:25:24.038657 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:25:24.038720 kubelet[3260]: E0517 00:25:24.038691 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:25:24.038839 kubelet[3260]: E0517 00:25:24.038752 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5sts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-dfcb87d55-nzxqv_calico-system(d4055bd2-75b3-4d87-b331-17976496ef74): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:25:24.039963 kubelet[3260]: E0517 00:25:24.039925 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:25:24.406449 kubelet[3260]: E0517 00:25:24.406213 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:25:36.407806 kubelet[3260]: E0517 00:25:36.407669 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:25:38.406313 containerd[1952]: time="2025-05-17T00:25:38.406202893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:25:38.735010 containerd[1952]: time="2025-05-17T00:25:38.734955979Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:25:38.735420 containerd[1952]: time="2025-05-17T00:25:38.735399888Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:25:38.735488 containerd[1952]: time="2025-05-17T00:25:38.735454698Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:25:38.735616 kubelet[3260]: E0517 00:25:38.735553 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:25:38.735616 kubelet[3260]: E0517 00:25:38.735596 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:25:38.735841 kubelet[3260]: E0517 00:25:38.735663 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9h4nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-jr8s4_calico-system(307fc931-df0d-46ad-a0f5-9c642c960ef0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:25:38.736844 kubelet[3260]: E0517 00:25:38.736793 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:25:48.407689 kubelet[3260]: E0517 00:25:48.407564 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:25:50.406314 kubelet[3260]: E0517 00:25:50.406223 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:25:53.252869 systemd[1]: Started sshd@11-147.75.202.203:22-218.92.0.158:51026.service - OpenSSH per-connection server daemon (218.92.0.158:51026). 
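Every image pull in this stretch dies at the same step: containerd's resolver requests an anonymous bearer token from ghcr.io before fetching the manifest, and the token endpoint answers 403 Forbidden, so no image data is ever transferred (the "bytes read=86" is only the error body). kubelet then demotes ErrImagePull to ImagePullBackOff and retries with exponential backoff (capped at five minutes by default), which is why the identical failure resurfaces at widening intervals. The auth step can be exercised in isolation; a sketch assuming curl and outbound HTTPS on the host (not part of this log):

  curl -s -o /dev/null -w '%{http_code}\n' \
    'https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io'

A 403 here reproduces the registry-side rejection independently of containerd. The SSH connection from 218.92.0.158 accepted just above is unrelated noise: a root password brute-force that PAM rejects below.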
May 17 00:25:54.310470 sshd[8263]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:25:56.393031 sshd[8261]: PAM: Permission denied for root from 218.92.0.158 May 17 00:25:56.671274 sshd[8264]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:25:58.026829 sshd[8261]: PAM: Permission denied for root from 218.92.0.158 May 17 00:25:58.305135 sshd[8265]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:26:00.603370 sshd[8261]: PAM: Permission denied for root from 218.92.0.158 May 17 00:26:00.741608 sshd[8261]: Received disconnect from 218.92.0.158 port 51026:11: [preauth] May 17 00:26:00.741608 sshd[8261]: Disconnected from authenticating user root 218.92.0.158 port 51026 [preauth] May 17 00:26:00.745040 systemd[1]: sshd@11-147.75.202.203:22-218.92.0.158:51026.service: Deactivated successfully. May 17 00:26:03.407338 kubelet[3260]: E0517 00:26:03.407198 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:26:05.405869 kubelet[3260]: E0517 00:26:05.405780 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:26:17.408132 kubelet[3260]: E0517 00:26:17.408024 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:26:19.405891 kubelet[3260]: E0517 00:26:19.405762 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:26:28.407118 kubelet[3260]: E0517 00:26:28.407016 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:26:31.406968 kubelet[3260]: E0517 00:26:31.406841 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" 
pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:26:41.455244 systemd[1]: Started sshd@12-147.75.202.203:22-14.103.122.89:58264.service - OpenSSH per-connection server daemon (14.103.122.89:58264). May 17 00:26:43.406071 kubelet[3260]: E0517 00:26:43.405965 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:26:43.407137 kubelet[3260]: E0517 00:26:43.406825 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:26:47.441193 sshd[8367]: Connection closed by 14.103.122.89 port 58264 [preauth] May 17 00:26:47.443121 systemd[1]: sshd@12-147.75.202.203:22-14.103.122.89:58264.service: Deactivated successfully. May 17 00:26:55.406686 kubelet[3260]: E0517 00:26:55.406610 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:26:57.409338 containerd[1952]: time="2025-05-17T00:26:57.409217317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:26:57.806298 containerd[1952]: time="2025-05-17T00:26:57.806132385Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:57.807159 containerd[1952]: time="2025-05-17T00:26:57.807063607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:57.807197 containerd[1952]: time="2025-05-17T00:26:57.807157051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:26:57.807353 kubelet[3260]: E0517 00:26:57.807312 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:26:57.807353 kubelet[3260]: E0517 00:26:57.807345 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:26:57.807657 kubelet[3260]: E0517 00:26:57.807404 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:89e605ba6a4a488d920453165d66bb98,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5sts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfcb87d55-nzxqv_calico-system(d4055bd2-75b3-4d87-b331-17976496ef74): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:57.809382 containerd[1952]: time="2025-05-17T00:26:57.809342463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:26:58.163063 containerd[1952]: time="2025-05-17T00:26:58.162785716Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:26:58.163803 containerd[1952]: time="2025-05-17T00:26:58.163725855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:26:58.163872 containerd[1952]: 
time="2025-05-17T00:26:58.163796088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:26:58.164015 kubelet[3260]: E0517 00:26:58.163958 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:26:58.164015 kubelet[3260]: E0517 00:26:58.163990 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:26:58.164118 kubelet[3260]: E0517 00:26:58.164056 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5sts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfcb87d55-nzxqv_calico-system(d4055bd2-75b3-4d87-b331-17976496ef74): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status 
from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:26:58.165347 kubelet[3260]: E0517 00:26:58.165302 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:27:09.407326 containerd[1952]: time="2025-05-17T00:27:09.407249417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:27:09.724891 containerd[1952]: time="2025-05-17T00:27:09.724746471Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:27:09.725775 containerd[1952]: time="2025-05-17T00:27:09.725749926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:27:09.725845 containerd[1952]: time="2025-05-17T00:27:09.725830920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:27:09.726006 kubelet[3260]: E0517 00:27:09.725940 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:27:09.726006 kubelet[3260]: E0517 00:27:09.726002 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:27:09.726263 kubelet[3260]: E0517 
00:27:09.726077 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9h4nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-jr8s4_calico-system(307fc931-df0d-46ad-a0f5-9c642c960ef0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:27:09.727398 kubelet[3260]: E0517 00:27:09.727384 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:27:11.407619 kubelet[3260]: E0517 00:27:11.407490 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:27:21.406692 kubelet[3260]: E0517 00:27:21.406590 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:27:23.406868 kubelet[3260]: E0517 00:27:23.406704 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:27:34.406419 kubelet[3260]: E0517 00:27:34.406333 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:27:37.409297 kubelet[3260]: E0517 00:27:37.409183 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:27:45.406201 kubelet[3260]: E0517 00:27:45.406084 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:27:52.406980 kubelet[3260]: E0517 00:27:52.406876 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:27:56.406582 kubelet[3260]: E0517 00:27:56.406485 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:28:06.408141 kubelet[3260]: E0517 00:28:06.408007 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:28:11.406728 kubelet[3260]: E0517 00:28:11.406608 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:28:18.270710 systemd[1]: Started sshd@13-147.75.202.203:22-218.92.0.158:10542.service - OpenSSH per-connection server daemon (218.92.0.158:10542). May 17 00:28:19.381690 sshd[8627]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:28:19.407404 kubelet[3260]: E0517 00:28:19.407314 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:28:21.369481 sshd[8625]: PAM: Permission denied for root from 218.92.0.158 May 17 00:28:21.663278 sshd[8628]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:28:23.590866 sshd[8625]: PAM: Permission denied for root from 218.92.0.158 May 17 00:28:23.887236 sshd[8629]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:28:24.111868 systemd[1]: Started sshd@14-147.75.202.203:22-14.103.122.89:57022.service - OpenSSH per-connection server daemon (14.103.122.89:57022). May 17 00:28:25.406426 kubelet[3260]: E0517 00:28:25.406337 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:28:26.426697 sshd[8625]: PAM: Permission denied for root from 218.92.0.158 May 17 00:28:26.572684 sshd[8625]: Received disconnect from 218.92.0.158 port 10542:11: [preauth] May 17 00:28:26.572684 sshd[8625]: Disconnected from authenticating user root 218.92.0.158 port 10542 [preauth] May 17 00:28:26.577638 systemd[1]: sshd@13-147.75.202.203:22-218.92.0.158:10542.service: Deactivated successfully. 
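Every pull failure logged above bottoms out in the same HTTP exchange: containerd asks the ghcr.io token endpoint for an anonymous pull-scope token and gets 403 Forbidden back, so the registry refuses this client before any image data moves. A minimal diagnostic sketch that replays that request, with the URL copied verbatim from the log; it assumes Python 3 and outbound HTTPS on whatever host runs it:

import urllib.error
import urllib.request

# Anonymous pull-scope token request, exactly as containerd issues it above.
TOKEN_URL = ("https://ghcr.io/token"
             "?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull"
             "&service=ghcr.io")

try:
    with urllib.request.urlopen(TOKEN_URL, timeout=10) as resp:
        print("status:", resp.status)   # 200 means a token was issued
        print(resp.read(200))           # start of the token JSON
except urllib.error.HTTPError as e:
    # The journal above shows ghcr.io answering 403 here, which containerd
    # reports as "failed to authorize: failed to fetch anonymous token".
    print("status:", e.code, e.reason)

A 200 from a different network would point at this node's egress (IP reputation or registry-side rate limiting) rather than the images; a 403 from everywhere would suggest the ghcr.io/flatcar/calico repositories are private or gone.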
May 17 00:28:32.407783 kubelet[3260]: E0517 00:28:32.407675 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:28:38.404361 kubelet[3260]: E0517 00:28:38.404332 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:28:43.407705 kubelet[3260]: E0517 00:28:43.407574 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:28:52.406154 kubelet[3260]: E0517 00:28:52.405948 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:28:58.407475 kubelet[3260]: E0517 00:28:58.407378 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:29:06.406240 kubelet[3260]: E0517 00:29:06.406147 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:29:10.407786 kubelet[3260]: E0517 00:29:10.407691 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:29:17.408789 kubelet[3260]: E0517 00:29:17.408653 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:29:21.408314 kubelet[3260]: E0517 00:29:21.408169 3260 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:29:31.406768 kubelet[3260]: E0517 00:29:31.406640 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:29:32.406837 kubelet[3260]: E0517 00:29:32.406710 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:29:43.407196 containerd[1952]: time="2025-05-17T00:29:43.407063471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:29:43.725102 containerd[1952]: time="2025-05-17T00:29:43.724958216Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:29:43.725896 containerd[1952]: time="2025-05-17T00:29:43.725874227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:29:43.725963 containerd[1952]: time="2025-05-17T00:29:43.725948105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:29:43.726062 kubelet[3260]: E0517 00:29:43.726039 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:29:43.726274 kubelet[3260]: E0517 00:29:43.726071 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:29:43.726274 kubelet[3260]: E0517 00:29:43.726133 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:89e605ba6a4a488d920453165d66bb98,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5sts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfcb87d55-nzxqv_calico-system(d4055bd2-75b3-4d87-b331-17976496ef74): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:29:43.727812 containerd[1952]: time="2025-05-17T00:29:43.727803042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:29:44.037892 containerd[1952]: time="2025-05-17T00:29:44.037746492Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:29:44.038357 containerd[1952]: time="2025-05-17T00:29:44.038327134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:29:44.038438 containerd[1952]: time="2025-05-17T00:29:44.038410847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:29:44.038557 kubelet[3260]: E0517 00:29:44.038531 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:29:44.038632 kubelet[3260]: E0517 00:29:44.038565 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:29:44.038673 kubelet[3260]: E0517 00:29:44.038636 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5sts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfcb87d55-nzxqv_calico-system(d4055bd2-75b3-4d87-b331-17976496ef74): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:29:44.039820 kubelet[3260]: E0517 00:29:44.039801 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:29:46.406679 kubelet[3260]: E0517 00:29:46.406559 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:29:58.407537 kubelet[3260]: E0517 00:29:58.407380 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:30:00.407121 containerd[1952]: time="2025-05-17T00:30:00.407006096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:30:00.728734 containerd[1952]: time="2025-05-17T00:30:00.728589863Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:30:00.729598 containerd[1952]: time="2025-05-17T00:30:00.729494573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:30:00.729722 containerd[1952]: time="2025-05-17T00:30:00.729618168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:30:00.729800 kubelet[3260]: E0517 00:30:00.729778 3260 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:30:00.730016 
kubelet[3260]: E0517 00:30:00.729809 3260 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:30:00.730016 kubelet[3260]: E0517 00:30:00.729883 3260 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9h4nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-jr8s4_calico-system(307fc931-df0d-46ad-a0f5-9c642c960ef0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status 
from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:30:00.731038 kubelet[3260]: E0517 00:30:00.731024 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:30:01.386363 systemd[1]: Started sshd@15-147.75.202.203:22-147.75.109.163:38352.service - OpenSSH per-connection server daemon (147.75.109.163:38352). May 17 00:30:01.449524 sshd[8873]: Accepted publickey for core from 147.75.109.163 port 38352 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:01.450341 sshd[8873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:01.453392 systemd-logind[1937]: New session 12 of user core. May 17 00:30:01.471758 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:30:01.603310 sshd[8873]: pam_unix(sshd:session): session closed for user core May 17 00:30:01.604908 systemd[1]: sshd@15-147.75.202.203:22-147.75.109.163:38352.service: Deactivated successfully. May 17 00:30:01.606328 systemd-logind[1937]: Session 12 logged out. Waiting for processes to exit. May 17 00:30:01.606421 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:30:01.607199 systemd-logind[1937]: Removed session 12. May 17 00:30:03.403761 systemd[1]: Started sshd@16-147.75.202.203:22-14.103.122.89:34064.service - OpenSSH per-connection server daemon (14.103.122.89:34064). May 17 00:30:05.241203 sshd[8902]: Invalid user cma from 14.103.122.89 port 34064 May 17 00:30:05.380499 sshd[8902]: Received disconnect from 14.103.122.89 port 34064:11: Bye Bye [preauth] May 17 00:30:05.380499 sshd[8902]: Disconnected from invalid user cma 14.103.122.89 port 34064 [preauth] May 17 00:30:05.382023 systemd[1]: sshd@16-147.75.202.203:22-14.103.122.89:34064.service: Deactivated successfully. May 17 00:30:06.619971 systemd[1]: Started sshd@17-147.75.202.203:22-147.75.109.163:38366.service - OpenSSH per-connection server daemon (147.75.109.163:38366). May 17 00:30:06.659544 sshd[8930]: Accepted publickey for core from 147.75.109.163 port 38366 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:06.660303 sshd[8930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:06.663073 systemd-logind[1937]: New session 13 of user core. May 17 00:30:06.675715 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:30:06.756268 sshd[8930]: pam_unix(sshd:session): session closed for user core May 17 00:30:06.758229 systemd[1]: sshd@17-147.75.202.203:22-147.75.109.163:38366.service: Deactivated successfully. May 17 00:30:06.759356 systemd-logind[1937]: Session 13 logged out. Waiting for processes to exit. May 17 00:30:06.759406 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:30:06.760094 systemd-logind[1937]: Removed session 13. 
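The "Back-off pulling image" entries repeat on kubelet's image-pull backoff schedule, and the spacing of the actual PullImage attempts above (00:26:57 and 00:27:09, then 00:29:43 and 00:30:00) is consistent with kubelet's default backoff doubling from 10 seconds toward its 5-minute cap. A rough sketch for extracting that cadence from a saved copy of this journal; the node.log file name and the exact message shapes are assumptions based on the lines above:

import re
import sys
from collections import defaultdict
from datetime import datetime

# containerd's pull-attempt lines, as they appear in this journal:
#   time="2025-05-17T00:26:57.809342463Z" level=info msg="PullImage \"ghcr.io/...\""
PAT = re.compile(
    r'time="(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})[^"]*" '
    r'level=info msg="PullImage \\"(?P<image>[^\\]+)\\""')

attempts = defaultdict(list)
with open(sys.argv[1] if len(sys.argv) > 1 else "node.log") as f:  # assumed file name
    for line in f:
        m = PAT.search(line)
        if m:
            attempts[m.group("image")].append(
                datetime.strptime(m.group("ts"), "%Y-%m-%dT%H:%M:%S"))

for image, times in attempts.items():
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    print(image, "retry gaps (s):", gaps)

Run against this section, it reports gaps of roughly 166 and 171 seconds per image, which sits right at the 160-second step of that doubling sequence plus pod-sync jitter.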
May 17 00:30:11.780803 systemd[1]: Started sshd@18-147.75.202.203:22-147.75.109.163:38350.service - OpenSSH per-connection server daemon (147.75.109.163:38350). May 17 00:30:11.809748 sshd[8958]: Accepted publickey for core from 147.75.109.163 port 38350 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:11.813134 sshd[8958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:11.824699 systemd-logind[1937]: New session 14 of user core. May 17 00:30:11.843798 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:30:11.992205 sshd[8958]: pam_unix(sshd:session): session closed for user core May 17 00:30:12.002842 systemd[1]: Started sshd@19-147.75.202.203:22-147.75.109.163:38358.service - OpenSSH per-connection server daemon (147.75.109.163:38358). May 17 00:30:12.003140 systemd[1]: sshd@18-147.75.202.203:22-147.75.109.163:38350.service: Deactivated successfully. May 17 00:30:12.004836 systemd-logind[1937]: Session 14 logged out. Waiting for processes to exit. May 17 00:30:12.004898 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:30:12.005661 systemd-logind[1937]: Removed session 14. May 17 00:30:12.031886 sshd[8983]: Accepted publickey for core from 147.75.109.163 port 38358 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:12.032687 sshd[8983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:12.035714 systemd-logind[1937]: New session 15 of user core. May 17 00:30:12.046781 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:30:12.186454 sshd[8983]: pam_unix(sshd:session): session closed for user core May 17 00:30:12.194734 systemd[1]: Started sshd@20-147.75.202.203:22-147.75.109.163:38368.service - OpenSSH per-connection server daemon (147.75.109.163:38368). May 17 00:30:12.195107 systemd[1]: sshd@19-147.75.202.203:22-147.75.109.163:38358.service: Deactivated successfully. May 17 00:30:12.196039 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:30:12.196746 systemd-logind[1937]: Session 15 logged out. Waiting for processes to exit. May 17 00:30:12.197446 systemd-logind[1937]: Removed session 15. May 17 00:30:12.222425 sshd[9011]: Accepted publickey for core from 147.75.109.163 port 38368 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:12.225960 sshd[9011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:12.238135 systemd-logind[1937]: New session 16 of user core. May 17 00:30:12.261215 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:30:12.399159 sshd[9011]: pam_unix(sshd:session): session closed for user core May 17 00:30:12.400757 systemd[1]: sshd@20-147.75.202.203:22-147.75.109.163:38368.service: Deactivated successfully. May 17 00:30:12.402205 systemd-logind[1937]: Session 16 logged out. Waiting for processes to exit. May 17 00:30:12.402280 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:30:12.402873 systemd-logind[1937]: Removed session 16. 
May 17 00:30:13.407930 kubelet[3260]: E0517 00:30:13.407813 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:30:14.405726 kubelet[3260]: E0517 00:30:14.405632 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:30:17.414211 systemd[1]: Started sshd@21-147.75.202.203:22-147.75.109.163:38372.service - OpenSSH per-connection server daemon (147.75.109.163:38372). May 17 00:30:17.446999 sshd[9072]: Accepted publickey for core from 147.75.109.163 port 38372 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:17.450863 sshd[9072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:17.462417 systemd-logind[1937]: New session 17 of user core. May 17 00:30:17.479215 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:30:17.614837 sshd[9072]: pam_unix(sshd:session): session closed for user core May 17 00:30:17.616729 systemd[1]: sshd@21-147.75.202.203:22-147.75.109.163:38372.service: Deactivated successfully. May 17 00:30:17.617837 systemd-logind[1937]: Session 17 logged out. Waiting for processes to exit. May 17 00:30:17.617837 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:30:17.618414 systemd-logind[1937]: Removed session 17. May 17 00:30:22.646339 systemd[1]: Started sshd@22-147.75.202.203:22-147.75.109.163:46596.service - OpenSSH per-connection server daemon (147.75.109.163:46596). May 17 00:30:22.678967 sshd[9105]: Accepted publickey for core from 147.75.109.163 port 46596 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:22.682414 sshd[9105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:22.693998 systemd-logind[1937]: New session 18 of user core. May 17 00:30:22.704251 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:30:22.792823 sshd[9105]: pam_unix(sshd:session): session closed for user core May 17 00:30:22.794333 systemd[1]: sshd@22-147.75.202.203:22-147.75.109.163:46596.service: Deactivated successfully. May 17 00:30:22.795770 systemd-logind[1937]: Session 18 logged out. Waiting for processes to exit. May 17 00:30:22.795814 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:30:22.796408 systemd-logind[1937]: Removed session 18. May 17 00:30:24.119466 systemd[1]: sshd@14-147.75.202.203:22-14.103.122.89:57022.service: Deactivated successfully. 
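A small detail in the teardown just above: the per-connection unit for 14.103.122.89 was started at 00:28:24 and deactivated at 00:30:24 without ever authenticating, a gap of exactly two minutes, which matches sshd's default LoginGraceTime of 120 seconds for connections that never complete auth. The arithmetic, with both timestamps lifted from the log:

from datetime import datetime

started = datetime.strptime("May 17 00:28:24", "%b %d %H:%M:%S")  # sshd@14 unit started
dropped = datetime.strptime("May 17 00:30:24", "%b %d %H:%M:%S")  # unit deactivated
print((dropped - started).total_seconds())  # 120.0, sshd's default LoginGraceTime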
May 17 00:30:27.405202 kubelet[3260]: E0517 00:30:27.405148 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:30:27.405463 kubelet[3260]: E0517 00:30:27.405275 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:30:27.812634 systemd[1]: Started sshd@23-147.75.202.203:22-147.75.109.163:46598.service - OpenSSH per-connection server daemon (147.75.109.163:46598). May 17 00:30:27.840299 sshd[9162]: Accepted publickey for core from 147.75.109.163 port 46598 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:27.841405 sshd[9162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:27.845369 systemd-logind[1937]: New session 19 of user core. May 17 00:30:27.853891 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:30:27.943012 sshd[9162]: pam_unix(sshd:session): session closed for user core May 17 00:30:27.953665 systemd[1]: sshd@23-147.75.202.203:22-147.75.109.163:46598.service: Deactivated successfully. May 17 00:30:27.960796 systemd-logind[1937]: Session 19 logged out. Waiting for processes to exit. May 17 00:30:27.961363 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:30:27.964330 systemd-logind[1937]: Removed session 19. May 17 00:30:32.963300 systemd[1]: Started sshd@24-147.75.202.203:22-147.75.109.163:52386.service - OpenSSH per-connection server daemon (147.75.109.163:52386). May 17 00:30:32.993524 sshd[9210]: Accepted publickey for core from 147.75.109.163 port 52386 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:32.994285 sshd[9210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:32.997307 systemd-logind[1937]: New session 20 of user core. May 17 00:30:33.015862 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:30:33.106673 sshd[9210]: pam_unix(sshd:session): session closed for user core May 17 00:30:33.118849 systemd[1]: Started sshd@25-147.75.202.203:22-147.75.109.163:52402.service - OpenSSH per-connection server daemon (147.75.109.163:52402). May 17 00:30:33.119314 systemd[1]: sshd@24-147.75.202.203:22-147.75.109.163:52386.service: Deactivated successfully. May 17 00:30:33.120257 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:30:33.120979 systemd-logind[1937]: Session 20 logged out. Waiting for processes to exit. May 17 00:30:33.121559 systemd-logind[1937]: Removed session 20. May 17 00:30:33.147268 sshd[9234]: Accepted publickey for core from 147.75.109.163 port 52402 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:33.150723 sshd[9234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:33.162393 systemd-logind[1937]: New session 21 of user core. May 17 00:30:33.180297 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 17 00:30:33.358201 sshd[9234]: pam_unix(sshd:session): session closed for user core May 17 00:30:33.382857 systemd[1]: Started sshd@26-147.75.202.203:22-147.75.109.163:52410.service - OpenSSH per-connection server daemon (147.75.109.163:52410). May 17 00:30:33.383323 systemd[1]: sshd@25-147.75.202.203:22-147.75.109.163:52402.service: Deactivated successfully. May 17 00:30:33.384278 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:30:33.384965 systemd-logind[1937]: Session 21 logged out. Waiting for processes to exit. May 17 00:30:33.385547 systemd-logind[1937]: Removed session 21. May 17 00:30:33.410511 sshd[9260]: Accepted publickey for core from 147.75.109.163 port 52410 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:33.411602 sshd[9260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:33.415613 systemd-logind[1937]: New session 22 of user core. May 17 00:30:33.437182 systemd[1]: Started session-22.scope - Session 22 of User core. May 17 00:30:33.785260 systemd[1]: Started sshd@27-147.75.202.203:22-218.92.0.158:47911.service - OpenSSH per-connection server daemon (218.92.0.158:47911). May 17 00:30:34.787209 sshd[9312]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:30:34.797351 sshd[9260]: pam_unix(sshd:session): session closed for user core May 17 00:30:34.813273 systemd[1]: Started sshd@28-147.75.202.203:22-147.75.109.163:52418.service - OpenSSH per-connection server daemon (147.75.109.163:52418). May 17 00:30:34.815139 systemd[1]: sshd@26-147.75.202.203:22-147.75.109.163:52410.service: Deactivated successfully. May 17 00:30:34.819335 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:30:34.822741 systemd-logind[1937]: Session 22 logged out. Waiting for processes to exit. May 17 00:30:34.825698 systemd-logind[1937]: Removed session 22. May 17 00:30:34.871940 sshd[9316]: Accepted publickey for core from 147.75.109.163 port 52418 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:34.874213 sshd[9316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:34.882251 systemd-logind[1937]: New session 23 of user core. May 17 00:30:34.900162 systemd[1]: Started session-23.scope - Session 23 of User core. May 17 00:30:35.096398 sshd[9316]: pam_unix(sshd:session): session closed for user core May 17 00:30:35.112159 systemd[1]: Started sshd@29-147.75.202.203:22-147.75.109.163:52432.service - OpenSSH per-connection server daemon (147.75.109.163:52432). May 17 00:30:35.113916 systemd[1]: sshd@28-147.75.202.203:22-147.75.109.163:52418.service: Deactivated successfully. May 17 00:30:35.117977 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:30:35.121492 systemd-logind[1937]: Session 23 logged out. Waiting for processes to exit. May 17 00:30:35.124218 systemd-logind[1937]: Removed session 23. May 17 00:30:35.177768 sshd[9345]: Accepted publickey for core from 147.75.109.163 port 52432 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:35.180998 sshd[9345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:35.192315 systemd-logind[1937]: New session 24 of user core. May 17 00:30:35.207133 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 17 00:30:35.348662 sshd[9345]: pam_unix(sshd:session): session closed for user core May 17 00:30:35.350348 systemd[1]: sshd@29-147.75.202.203:22-147.75.109.163:52432.service: Deactivated successfully. May 17 00:30:35.351976 systemd-logind[1937]: Session 24 logged out. Waiting for processes to exit. May 17 00:30:35.352106 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:30:35.352812 systemd-logind[1937]: Removed session 24. May 17 00:30:36.307810 sshd[9305]: PAM: Permission denied for root from 218.92.0.158 May 17 00:30:36.579059 sshd[9376]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:30:38.375664 sshd[9305]: PAM: Permission denied for root from 218.92.0.158 May 17 00:30:38.646467 sshd[9381]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root May 17 00:30:40.362752 systemd[1]: Started sshd@30-147.75.202.203:22-147.75.109.163:48388.service - OpenSSH per-connection server daemon (147.75.109.163:48388). May 17 00:30:40.382996 sshd[9305]: PAM: Permission denied for root from 218.92.0.158 May 17 00:30:40.390705 sshd[9382]: Accepted publickey for core from 147.75.109.163 port 48388 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:40.391437 sshd[9382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:40.393982 systemd-logind[1937]: New session 25 of user core. May 17 00:30:40.409208 systemd[1]: Started session-25.scope - Session 25 of User core. May 17 00:30:40.409387 kubelet[3260]: E0517 00:30:40.409269 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-jr8s4" podUID="307fc931-df0d-46ad-a0f5-9c642c960ef0" May 17 00:30:40.489870 sshd[9382]: pam_unix(sshd:session): session closed for user core May 17 00:30:40.491365 systemd[1]: sshd@30-147.75.202.203:22-147.75.109.163:48388.service: Deactivated successfully. May 17 00:30:40.492840 systemd-logind[1937]: Session 25 logged out. Waiting for processes to exit. May 17 00:30:40.492909 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:30:40.493481 systemd-logind[1937]: Removed session 25. May 17 00:30:40.516726 sshd[9305]: Received disconnect from 218.92.0.158 port 47911:11: [preauth] May 17 00:30:40.516726 sshd[9305]: Disconnected from authenticating user root 218.92.0.158 port 47911 [preauth] May 17 00:30:40.517565 systemd[1]: sshd@27-147.75.202.203:22-218.92.0.158:47911.service: Deactivated successfully. May 17 00:30:42.405445 kubelet[3260]: E0517 00:30:42.405373 3260 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-dfcb87d55-nzxqv" podUID="d4055bd2-75b3-4d87-b331-17976496ef74" May 17 00:30:45.511901 systemd[1]: Started sshd@31-147.75.202.203:22-147.75.109.163:48402.service - OpenSSH per-connection server daemon (147.75.109.163:48402). 
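The failed root logins from 218.92.0.158 here and at 00:28 above, like the invalid-user probe from 14.103.122.89, are routine SSH brute-force noise; every one of them dies in preauth, while only the publickey logins for core succeed. A quick tally per source address, again assuming the journal has been dumped to a node.log text file:

import re
from collections import Counter

failures = Counter()
with open("node.log") as f:  # assumed file name
    for line in f:
        # "pam_unix(sshd:auth): authentication failure; ... rhost=218.92.0.158 user=root"
        m = re.search(r"authentication failure;.*rhost=(\S+)", line)
        if m:
            failures[m.group(1)] += 1
        # "Invalid user cma from 14.103.122.89 port 34064"
        m = re.search(r"Invalid user \S+ from (\S+)", line)
        if m:
            failures[m.group(1)] += 1

for host, count in failures.most_common():
    print(f"{host}: {count} failed attempts")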
May 17 00:30:45.539842 sshd[9412]: Accepted publickey for core from 147.75.109.163 port 48402 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:45.540645 sshd[9412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:45.543683 systemd-logind[1937]: New session 26 of user core. May 17 00:30:45.557780 systemd[1]: Started session-26.scope - Session 26 of User core. May 17 00:30:45.646516 sshd[9412]: pam_unix(sshd:session): session closed for user core May 17 00:30:45.648012 systemd[1]: sshd@31-147.75.202.203:22-147.75.109.163:48402.service: Deactivated successfully. May 17 00:30:45.649388 systemd-logind[1937]: Session 26 logged out. Waiting for processes to exit. May 17 00:30:45.649486 systemd[1]: session-26.scope: Deactivated successfully. May 17 00:30:45.650082 systemd-logind[1937]: Removed session 26. May 17 00:30:50.666794 systemd[1]: Started sshd@32-147.75.202.203:22-147.75.109.163:49324.service - OpenSSH per-connection server daemon (147.75.109.163:49324). May 17 00:30:50.694563 sshd[9470]: Accepted publickey for core from 147.75.109.163 port 49324 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 00:30:50.695570 sshd[9470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:50.699344 systemd-logind[1937]: New session 27 of user core. May 17 00:30:50.716943 systemd[1]: Started session-27.scope - Session 27 of User core. May 17 00:30:50.804585 sshd[9470]: pam_unix(sshd:session): session closed for user core May 17 00:30:50.806074 systemd[1]: sshd@32-147.75.202.203:22-147.75.109.163:49324.service: Deactivated successfully. May 17 00:30:50.807499 systemd-logind[1937]: Session 27 logged out. Waiting for processes to exit. May 17 00:30:50.807601 systemd[1]: session-27.scope: Deactivated successfully. May 17 00:30:50.808235 systemd-logind[1937]: Removed session 27.
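The core sessions themselves (12 through 27) each open and close within a second or two, which reads more like scripted access over SSH, health checks or orchestration, than interactive use; that is a guess from timing alone. Pairing the logind open/close entries gives per-session durations; the line formats are copied from the entries above and the node.log file name is again an assumption:

import re
from datetime import datetime

STAMP = r"(\w{3} \d{2} \d{2}:\d{2}:\d{2})"  # e.g. "May 17 00:30:01"
opened = {}

with open("node.log") as f:  # assumed file name
    for line in f:
        m = re.search(STAMP + r"\.\d+ .*New session (\d+) of user core", line)
        if m:
            opened[m.group(2)] = datetime.strptime(m.group(1), "%b %d %H:%M:%S")
        m = re.search(STAMP + r"\.\d+ .*Removed session (\d+)\.", line)
        if m and m.group(2) in opened:
            start = opened.pop(m.group(2))
            end = datetime.strptime(m.group(1), "%b %d %H:%M:%S")
            print(f"session {m.group(2)}: {(end - start).total_seconds():.0f}s")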