May 17 01:43:44.015463 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 01:43:44.015477 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 01:43:44.015484 kernel: BIOS-provided physical RAM map:
May 17 01:43:44.015488 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
May 17 01:43:44.015492 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
May 17 01:43:44.015496 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
May 17 01:43:44.015501 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
May 17 01:43:44.015505 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
May 17 01:43:44.015509 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b1dfff] usable
May 17 01:43:44.015513 kernel: BIOS-e820: [mem 0x0000000081b1e000-0x0000000081b1efff] ACPI NVS
May 17 01:43:44.015517 kernel: BIOS-e820: [mem 0x0000000081b1f000-0x0000000081b1ffff] reserved
May 17 01:43:44.015522 kernel: BIOS-e820: [mem 0x0000000081b20000-0x000000008afccfff] usable
May 17 01:43:44.015526 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
May 17 01:43:44.015531 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
May 17 01:43:44.015536 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
May 17 01:43:44.015541 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
May 17 01:43:44.015546 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
May 17 01:43:44.015551 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
May 17 01:43:44.015556 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 17 01:43:44.015560 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
May 17 01:43:44.015565 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
May 17 01:43:44.015570 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
May 17 01:43:44.015574 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
May 17 01:43:44.015579 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
May 17 01:43:44.015583 kernel: NX (Execute Disable) protection: active
May 17 01:43:44.015588 kernel: APIC: Static calls initialized
May 17 01:43:44.015593 kernel: SMBIOS 3.2.1 present.
May 17 01:43:44.015597 kernel: DMI: Supermicro X11SCM-F/X11SCM-F, BIOS 1.9 09/16/2022
May 17 01:43:44.015603 kernel: tsc: Detected 3400.000 MHz processor
May 17 01:43:44.015608 kernel: tsc: Detected 3399.906 MHz TSC
May 17 01:43:44.015612 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 01:43:44.015618 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 01:43:44.015622 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
May 17 01:43:44.015627 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
May 17 01:43:44.015632 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 01:43:44.015637 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
May 17 01:43:44.015641 kernel: Using GB pages for direct mapping
May 17 01:43:44.015647 kernel: ACPI: Early table checksum verification disabled
May 17 01:43:44.015652 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
May 17 01:43:44.015657 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
May 17 01:43:44.015663 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
May 17 01:43:44.015669 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
May 17 01:43:44.015674 kernel: ACPI: FACS 0x000000008C66CF80 000040
May 17 01:43:44.015679 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
May 17 01:43:44.015685 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
May 17 01:43:44.015690 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
May 17 01:43:44.015695 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
May 17 01:43:44.015700 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
May 17 01:43:44.015705 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
May 17 01:43:44.015710 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
May 17 01:43:44.015715 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
May 17 01:43:44.015721 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
May 17 01:43:44.015726 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
May 17 01:43:44.015731 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
May 17 01:43:44.015736 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
May 17 01:43:44.015741 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
May 17 01:43:44.015746 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
May 17 01:43:44.015751 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
May 17 01:43:44.015756 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
May 17 01:43:44.015761 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
May 17 01:43:44.015767 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
May 17 01:43:44.015772 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
May 17 01:43:44.015777 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
May 17 01:43:44.015782 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
May 17 01:43:44.015787 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
May 17 01:43:44.015792 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
May 17 01:43:44.015797 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
May 17 01:43:44.015802 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
May 17 01:43:44.015808 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
May 17 01:43:44.015813 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
May 17 01:43:44.015818 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
May 17 01:43:44.015823 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
May 17 01:43:44.015828 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
May 17 01:43:44.015833 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
May 17 01:43:44.015838 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
May 17 01:43:44.015843 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
May 17 01:43:44.015848 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
May 17 01:43:44.015854 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
May 17 01:43:44.015858 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
May 17 01:43:44.015863 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
May 17 01:43:44.015868 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
May 17 01:43:44.015873 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
May 17 01:43:44.015878 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
May 17 01:43:44.015883 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
May 17 01:43:44.015888 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
May 17 01:43:44.015893 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
May 17 01:43:44.015899 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
May 17 01:43:44.015904 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
May 17 01:43:44.015909 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
May 17 01:43:44.015914 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
May 17 01:43:44.015919 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
May 17 01:43:44.015924 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
May 17 01:43:44.015929 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
May 17 01:43:44.015934 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
May 17 01:43:44.015939 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
May 17 01:43:44.015943 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
May 17 01:43:44.015949 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
May 17 01:43:44.015954 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
May 17 01:43:44.015959 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
May 17 01:43:44.015964 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
May 17 01:43:44.015969 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
May 17 01:43:44.015974 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
May 17 01:43:44.015979 kernel: No NUMA configuration found
May 17 01:43:44.015984 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
May 17 01:43:44.015989 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
May 17 01:43:44.015995 kernel: Zone ranges:
May 17 01:43:44.016000 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 01:43:44.016006 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 17 01:43:44.016011 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
May 17 01:43:44.016016 kernel: Movable zone start for each node
May 17 01:43:44.016020 kernel: Early memory node ranges
May 17 01:43:44.016025 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
May 17 01:43:44.016030 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
May 17 01:43:44.016035 kernel: node 0: [mem 0x0000000040400000-0x0000000081b1dfff]
May 17 01:43:44.016041 kernel: node 0: [mem 0x0000000081b20000-0x000000008afccfff]
May 17 01:43:44.016046 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
May 17 01:43:44.016051 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
May 17 01:43:44.016056 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
May 17 01:43:44.016065 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
May 17 01:43:44.016071 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 01:43:44.016076 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
May 17 01:43:44.016081 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
May 17 01:43:44.016087 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
May 17 01:43:44.016093 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
May 17 01:43:44.016098 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
May 17 01:43:44.016104 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
May 17 01:43:44.016109 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
May 17 01:43:44.016114 kernel: ACPI: PM-Timer IO Port: 0x1808
May 17 01:43:44.016120 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
May 17 01:43:44.016125 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
May 17 01:43:44.016130 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
May 17 01:43:44.016137 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
May 17 01:43:44.016142 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
May 17 01:43:44.016147 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
May 17 01:43:44.016153 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
May 17 01:43:44.016158 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
May 17 01:43:44.016163 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
May 17 01:43:44.016168 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
May 17 01:43:44.016174 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
May 17 01:43:44.016179 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
May 17 01:43:44.016185 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
May 17 01:43:44.016190 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
May 17 01:43:44.016196 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
May 17 01:43:44.016201 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
May 17 01:43:44.016206 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
May 17 01:43:44.016211 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 01:43:44.016217 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 01:43:44.016222 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 01:43:44.016228 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 01:43:44.016234 kernel: TSC deadline timer available
May 17 01:43:44.016239 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
May 17 01:43:44.016244 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
May 17 01:43:44.016250 kernel: Booting paravirtualized kernel on bare hardware
May 17 01:43:44.016255 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 01:43:44.016261 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
May 17 01:43:44.016266 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
May 17 01:43:44.016274 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
May 17 01:43:44.016280 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
May 17 01:43:44.016286 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 01:43:44.016313 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 01:43:44.016319 kernel: random: crng init done
May 17 01:43:44.016324 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
May 17 01:43:44.016330 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
May 17 01:43:44.016335 kernel: Fallback order for Node 0: 0
May 17 01:43:44.016354 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
May 17 01:43:44.016359 kernel: Policy zone: Normal
May 17 01:43:44.016366 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 01:43:44.016371 kernel: software IO TLB: area num 16.
May 17 01:43:44.016377 kernel: Memory: 32720300K/33452980K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 732420K reserved, 0K cma-reserved)
May 17 01:43:44.016382 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
May 17 01:43:44.016388 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 01:43:44.016393 kernel: ftrace: allocated 149 pages with 4 groups
May 17 01:43:44.016398 kernel: Dynamic Preempt: voluntary
May 17 01:43:44.016404 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 01:43:44.016409 kernel: rcu: RCU event tracing is enabled.
May 17 01:43:44.016416 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
May 17 01:43:44.016421 kernel: Trampoline variant of Tasks RCU enabled.
May 17 01:43:44.016427 kernel: Rude variant of Tasks RCU enabled.
May 17 01:43:44.016432 kernel: Tracing variant of Tasks RCU enabled.
May 17 01:43:44.016437 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 01:43:44.016443 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
May 17 01:43:44.016448 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
May 17 01:43:44.016453 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 01:43:44.016459 kernel: Console: colour dummy device 80x25
May 17 01:43:44.016464 kernel: printk: console [tty0] enabled
May 17 01:43:44.016470 kernel: printk: console [ttyS1] enabled
May 17 01:43:44.016476 kernel: ACPI: Core revision 20230628
May 17 01:43:44.016481 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
May 17 01:43:44.016487 kernel: APIC: Switch to symmetric I/O mode setup
May 17 01:43:44.016492 kernel: DMAR: Host address width 39
May 17 01:43:44.016497 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
May 17 01:43:44.016503 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
May 17 01:43:44.016508 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
May 17 01:43:44.016513 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
May 17 01:43:44.016520 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
May 17 01:43:44.016525 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
May 17 01:43:44.016531 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
May 17 01:43:44.016536 kernel: x2apic enabled
May 17 01:43:44.016542 kernel: APIC: Switched APIC routing to: cluster x2apic
May 17 01:43:44.016547 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
May 17 01:43:44.016553 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
May 17 01:43:44.016558 kernel: CPU0: Thermal monitoring enabled (TM1)
May 17 01:43:44.016563 kernel: process: using mwait in idle threads
May 17 01:43:44.016570 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 17 01:43:44.016575 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 17 01:43:44.016580 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 01:43:44.016585 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
May 17 01:43:44.016591 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
May 17 01:43:44.016596 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
May 17 01:43:44.016601 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
May 17 01:43:44.016607 kernel: RETBleed: Mitigation: Enhanced IBRS
May 17 01:43:44.016612 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 01:43:44.016617 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 01:43:44.016622 kernel: TAA: Mitigation: TSX disabled
May 17 01:43:44.016629 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
May 17 01:43:44.016634 kernel: SRBDS: Mitigation: Microcode
May 17 01:43:44.016639 kernel: GDS: Mitigation: Microcode
May 17 01:43:44.016645 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 01:43:44.016650 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 01:43:44.016655 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 01:43:44.016661 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
May 17 01:43:44.016666 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
May 17 01:43:44.016671 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 01:43:44.016677 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
May 17 01:43:44.016682 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
May 17 01:43:44.016688 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
May 17 01:43:44.016694 kernel: Freeing SMP alternatives memory: 32K
May 17 01:43:44.016699 kernel: pid_max: default: 32768 minimum: 301
May 17 01:43:44.016704 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 01:43:44.016709 kernel: landlock: Up and running.
May 17 01:43:44.016715 kernel: SELinux: Initializing.
May 17 01:43:44.016720 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 01:43:44.016725 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 01:43:44.016731 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
May 17 01:43:44.016736 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 17 01:43:44.016742 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 17 01:43:44.016748 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 17 01:43:44.016754 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
May 17 01:43:44.016759 kernel: ... version: 4
May 17 01:43:44.016764 kernel: ... bit width: 48
May 17 01:43:44.016770 kernel: ... generic registers: 4
May 17 01:43:44.016775 kernel: ... value mask: 0000ffffffffffff
May 17 01:43:44.016780 kernel: ... max period: 00007fffffffffff
May 17 01:43:44.016786 kernel: ... fixed-purpose events: 3
May 17 01:43:44.016791 kernel: ... event mask: 000000070000000f
May 17 01:43:44.016797 kernel: signal: max sigframe size: 2032
May 17 01:43:44.016803 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
May 17 01:43:44.016808 kernel: rcu: Hierarchical SRCU implementation.
May 17 01:43:44.016813 kernel: rcu: Max phase no-delay instances is 400.
May 17 01:43:44.016819 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
May 17 01:43:44.016824 kernel: smp: Bringing up secondary CPUs ...
May 17 01:43:44.016829 kernel: smpboot: x86: Booting SMP configuration:
May 17 01:43:44.016835 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
May 17 01:43:44.016841 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 17 01:43:44.016847 kernel: smp: Brought up 1 node, 16 CPUs
May 17 01:43:44.016852 kernel: smpboot: Max logical packages: 1
May 17 01:43:44.016858 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
May 17 01:43:44.016863 kernel: devtmpfs: initialized
May 17 01:43:44.016868 kernel: x86/mm: Memory block size: 128MB
May 17 01:43:44.016874 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b1e000-0x81b1efff] (4096 bytes)
May 17 01:43:44.016879 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
May 17 01:43:44.016885 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 01:43:44.016891 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
May 17 01:43:44.016896 kernel: pinctrl core: initialized pinctrl subsystem
May 17 01:43:44.016901 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 01:43:44.016907 kernel: audit: initializing netlink subsys (disabled)
May 17 01:43:44.016912 kernel: audit: type=2000 audit(1747446218.039:1): state=initialized audit_enabled=0 res=1
May 17 01:43:44.016917 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 01:43:44.016923 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 01:43:44.016928 kernel: cpuidle: using governor menu
May 17 01:43:44.016933 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 01:43:44.016955 kernel: dca service started, version 1.12.1
May 17 01:43:44.016961 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 17 01:43:44.016966 kernel: PCI: Using configuration type 1 for base access
May 17 01:43:44.016985 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
May 17 01:43:44.016991 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 01:43:44.016996 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 01:43:44.017001 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 01:43:44.017007 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 01:43:44.017012 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 01:43:44.017018 kernel: ACPI: Added _OSI(Module Device)
May 17 01:43:44.017024 kernel: ACPI: Added _OSI(Processor Device)
May 17 01:43:44.017029 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 01:43:44.017034 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 01:43:44.017040 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
May 17 01:43:44.017045 kernel: ACPI: Dynamic OEM Table Load:
May 17 01:43:44.017050 kernel: ACPI: SSDT 0xFFFF900E40E5A000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
May 17 01:43:44.017056 kernel: ACPI: Dynamic OEM Table Load:
May 17 01:43:44.017061 kernel: ACPI: SSDT 0xFFFF900E41E2C800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
May 17 01:43:44.017067 kernel: ACPI: Dynamic OEM Table Load:
May 17 01:43:44.017073 kernel: ACPI: SSDT 0xFFFF900E40E04A00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
May 17 01:43:44.017078 kernel: ACPI: Dynamic OEM Table Load:
May 17 01:43:44.017083 kernel: ACPI: SSDT 0xFFFF900E41582000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
May 17 01:43:44.017089 kernel: ACPI: Dynamic OEM Table Load:
May 17 01:43:44.017094 kernel: ACPI: SSDT 0xFFFF900E42448000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
May 17 01:43:44.017099 kernel: ACPI: Dynamic OEM Table Load:
May 17 01:43:44.017105 kernel: ACPI: SSDT 0xFFFF900E42453400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
May 17 01:43:44.017110 kernel: ACPI: _OSC evaluated successfully for all CPUs
May 17 01:43:44.017115 kernel: ACPI: Interpreter enabled
May 17 01:43:44.017122 kernel: ACPI: PM: (supports S0 S5)
May 17 01:43:44.017127 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 01:43:44.017132 kernel: HEST: Enabling Firmware First mode for corrected errors.
May 17 01:43:44.017138 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
May 17 01:43:44.017143 kernel: HEST: Table parsing has been initialized.
May 17 01:43:44.017148 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
May 17 01:43:44.017154 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 01:43:44.017159 kernel: PCI: Using E820 reservations for host bridge windows
May 17 01:43:44.017164 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
May 17 01:43:44.017171 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
May 17 01:43:44.017176 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
May 17 01:43:44.017182 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
May 17 01:43:44.017187 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
May 17 01:43:44.017192 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
May 17 01:43:44.017198 kernel: ACPI: \_TZ_.FN00: New power resource
May 17 01:43:44.017203 kernel: ACPI: \_TZ_.FN01: New power resource
May 17 01:43:44.017209 kernel: ACPI: \_TZ_.FN02: New power resource
May 17 01:43:44.017214 kernel: ACPI: \_TZ_.FN03: New power resource
May 17 01:43:44.017220 kernel: ACPI: \_TZ_.FN04: New power resource
May 17 01:43:44.017226 kernel: ACPI: \PIN_: New power resource
May 17 01:43:44.017231 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
May 17 01:43:44.017340 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 01:43:44.017394 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
May 17 01:43:44.017441 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
May 17 01:43:44.017449 kernel: PCI host bridge to bus 0000:00
May 17 01:43:44.017499 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 01:43:44.017542 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 01:43:44.017584 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 01:43:44.017625 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
May 17 01:43:44.017666 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
May 17 01:43:44.017707 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
May 17 01:43:44.017766 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
May 17 01:43:44.017821 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
May 17 01:43:44.017870 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
May 17 01:43:44.017922 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
May 17 01:43:44.017969 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
May 17 01:43:44.018020 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
May 17 01:43:44.018066 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
May 17 01:43:44.018121 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
May 17 01:43:44.018167 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
May 17 01:43:44.018214 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
May 17 01:43:44.018264 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
May 17 01:43:44.018355 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
May 17 01:43:44.018401 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
May 17 01:43:44.018456 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
May 17 01:43:44.018504 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 17 01:43:44.018554 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
May 17 01:43:44.018601 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 17 01:43:44.018650 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
May 17 01:43:44.018697 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
May 17 01:43:44.018744 kernel: pci 0000:00:16.0: PME# supported from D3hot
May 17 01:43:44.018797 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
May 17 01:43:44.018843 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
May 17 01:43:44.018898 kernel: pci 0000:00:16.1: PME# supported from D3hot
May 17 01:43:44.018948 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
May 17 01:43:44.018996 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
May 17 01:43:44.019042 kernel: pci 0000:00:16.4: PME# supported from D3hot
May 17 01:43:44.019094 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
May 17 01:43:44.019141 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
May 17 01:43:44.019188 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
May 17 01:43:44.019234 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
May 17 01:43:44.019285 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
May 17 01:43:44.019332 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
May 17 01:43:44.019379 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
May 17 01:43:44.019428 kernel: pci 0000:00:17.0: PME# supported from D3hot
May 17 01:43:44.019483 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
May 17 01:43:44.019534 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
May 17 01:43:44.019585 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
May 17 01:43:44.019636 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
May 17 01:43:44.019686 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
May 17 01:43:44.019734 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
May 17 01:43:44.019786 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
May 17 01:43:44.019834 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
May 17 01:43:44.019884 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
May 17 01:43:44.019934 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
May 17 01:43:44.019984 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
May 17 01:43:44.020032 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 17 01:43:44.020083 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
May 17 01:43:44.020135 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
May 17 01:43:44.020183 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
May 17 01:43:44.020231 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
May 17 01:43:44.020315 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
May 17 01:43:44.020365 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
May 17 01:43:44.020418 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
May 17 01:43:44.020466 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
May 17 01:43:44.020515 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
May 17 01:43:44.020565 kernel: pci 0000:01:00.0: PME# supported from D3cold
May 17 01:43:44.020613 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
May 17 01:43:44.020661 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
May 17 01:43:44.020715 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
May 17 01:43:44.020763 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
May 17 01:43:44.020812 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
May 17 01:43:44.020861 kernel: pci 0000:01:00.1: PME# supported from D3cold
May 17 01:43:44.020910 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
May 17 01:43:44.020959 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
May 17 01:43:44.021007 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 17 01:43:44.021054 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
May 17 01:43:44.021101 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
May 17 01:43:44.021149 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
May 17 01:43:44.021201 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
May 17 01:43:44.021251 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
May 17 01:43:44.021307 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
May 17 01:43:44.021355 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
May 17 01:43:44.021404 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
May 17 01:43:44.021453 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
May 17 01:43:44.021501 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
May 17 01:43:44.021548 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
May 17 01:43:44.021596 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
May 17 01:43:44.021651 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
May 17 01:43:44.021700 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
May 17 01:43:44.021749 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
May 17 01:43:44.021797 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
May 17 01:43:44.021846 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
May 17 01:43:44.021894 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
May 17 01:43:44.021942 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
May 17 01:43:44.021989 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
May 17 01:43:44.022038 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
May 17 01:43:44.022085 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
May 17 01:43:44.022142 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
May 17 01:43:44.022190 kernel: pci 0000:06:00.0: enabling Extended Tags
May 17 01:43:44.022239 kernel: pci 0000:06:00.0: supports D1 D2
May 17 01:43:44.022291 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 17 01:43:44.022339 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
May 17 01:43:44.022390 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
May 17 01:43:44.022436 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
May 17 01:43:44.022489 kernel: pci_bus 0000:07: extended config space not accessible
May 17 01:43:44.022545 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
May 17 01:43:44.022597 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
May 17 01:43:44.022648 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
May 17 01:43:44.022697 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
May 17 01:43:44.022750 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 01:43:44.022799 kernel: pci 0000:07:00.0: supports D1 D2
May 17 01:43:44.022852 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 17 01:43:44.022901 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
May 17 01:43:44.022949 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
May 17 01:43:44.022998 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
May 17 01:43:44.023006 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
May 17 01:43:44.023012 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
May 17 01:43:44.023020 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
May 17 01:43:44.023025 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
May 17 01:43:44.023031 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
May 17 01:43:44.023037 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
May 17 01:43:44.023042 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
May 17 01:43:44.023048 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
May 17 01:43:44.023054 kernel: iommu: Default domain type: Translated
May 17 01:43:44.023059 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 01:43:44.023065 kernel: PCI: Using ACPI for IRQ routing
May 17 01:43:44.023072 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 01:43:44.023077 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
May 17 01:43:44.023083 kernel: e820: reserve RAM buffer [mem 0x81b1e000-0x83ffffff]
May 17 01:43:44.023088 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
May 17 01:43:44.023094 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
May 17 01:43:44.023099 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
May 17 01:43:44.023105 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
May 17 01:43:44.023153 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
May 17 01:43:44.023203 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
May 17 01:43:44.023255 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 01:43:44.023263 kernel: vgaarb: loaded
May 17 01:43:44.023269 kernel: clocksource: Switched to clocksource tsc-early
May 17 01:43:44.023293 kernel: VFS: Disk quotas dquot_6.6.0
May 17 01:43:44.023299 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 01:43:44.023305 kernel: pnp: PnP ACPI init
May 17 01:43:44.023355 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
May 17 01:43:44.023403 kernel: pnp 00:02: [dma 0 disabled]
May 17 01:43:44.023457 kernel: pnp 00:03: [dma 0 disabled]
May 17 01:43:44.023505 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
May 17 01:43:44.023551 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
May 17 01:43:44.023598 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
May 17 01:43:44.023645 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
May 17 01:43:44.023690 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
May 17 01:43:44.023735 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
May 17 01:43:44.023780 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
May 17 01:43:44.023825 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
May 17 01:43:44.023870 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
May 17 01:43:44.023913 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
May 17 01:43:44.023958 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
May 17 01:43:44.024005 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
May 17 01:43:44.024052 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
May 17 01:43:44.024094 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
May 17 01:43:44.024138 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
May 17 01:43:44.024181 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
May 17 01:43:44.024225 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
May 17 01:43:44.024269 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
May 17 01:43:44.024320 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
May 17 01:43:44.024346 kernel: pnp: PnP ACPI: found 10 devices
May 17 01:43:44.024352 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 01:43:44.024358 kernel: NET: Registered PF_INET protocol family
May 17 01:43:44.024364 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 01:43:44.024370 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
May 17 01:43:44.024376 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 01:43:44.024382 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 01:43:44.024388 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 17 01:43:44.024395 kernel: TCP: Hash tables configured (established 262144 bind 65536)
May 17 01:43:44.024401 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 17 01:43:44.024406 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 17 01:43:44.024412 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 01:43:44.024418 kernel: NET: Registered PF_XDP protocol family
May 17 01:43:44.024466 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
May 17 01:43:44.024512 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
May 17 01:43:44.024560 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
May 17 01:43:44.024608 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
May 17 01:43:44.024660 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
May 17 01:43:44.024708 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
May 17 01:43:44.024757 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
May 17 01:43:44.024804 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 17 01:43:44.024851 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
May 17 01:43:44.024898 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
May 17 01:43:44.024945 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
May 17 01:43:44.024994 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
May 17 01:43:44.025040 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
May 17 01:43:44.025089 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
May 17 01:43:44.025136 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
May 17 01:43:44.025183 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
May 17 01:43:44.025232 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
May 17 01:43:44.025283 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
May 17 01:43:44.025366 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
May 17 01:43:44.025415 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
May 17 01:43:44.025462 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
May 17 01:43:44.025511 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
May 17 01:43:44.025559 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
May 17 01:43:44.025606 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
May 17 01:43:44.025650 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
May 17 01:43:44.025693 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 01:43:44.025735 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 01:43:44.025776 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 01:43:44.025818 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
May 17 01:43:44.025858 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
May 17 01:43:44.025909 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
May 17 01:43:44.025952 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
May 17 01:43:44.026001 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
May 17 01:43:44.026044 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
May 17 01:43:44.026091 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
May 17 01:43:44.026134 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
May 17 01:43:44.026183 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
May 17 01:43:44.026226 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
May 17 01:43:44.026277 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
May 17 01:43:44.026358 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
May 17 01:43:44.026366 kernel: PCI: CLS 64 bytes, default 64
May 17 01:43:44.026372 kernel: DMAR: No ATSR found
May 17 01:43:44.026378 kernel: DMAR: No SATC found
May 17 01:43:44.026384 kernel: DMAR: dmar0: Using Queued invalidation
May 17 01:43:44.026431 kernel: pci 0000:00:00.0: Adding to iommu group 0
May 17 01:43:44.026478 kernel: pci 0000:00:01.0: Adding to iommu group 1
May 17 01:43:44.026525 kernel: pci 0000:00:08.0: Adding to iommu group 2
May 17 01:43:44.026575 kernel: pci 0000:00:12.0: Adding to iommu group 3
May 17 01:43:44.026621 kernel: pci 0000:00:14.0: Adding to iommu group 4
May 17 01:43:44.026668 kernel: pci 0000:00:14.2: Adding to iommu group 4
May 17 01:43:44.026713 kernel: pci 0000:00:15.0: Adding to iommu group 5
May 17 01:43:44.026760 kernel: pci 0000:00:15.1: Adding to iommu group 5
May 17 01:43:44.026805 kernel: pci 0000:00:16.0: Adding to iommu group 6
May 17 01:43:44.026852 kernel: pci 0000:00:16.1: Adding to iommu group 6
May 17 01:43:44.026898 kernel: pci 0000:00:16.4: Adding to iommu group 6
May 17 01:43:44.026947 kernel: pci 0000:00:17.0: Adding to iommu group 7
May 17 01:43:44.026993 kernel: pci 0000:00:1b.0: Adding to iommu group 8
May 17 01:43:44.027040 kernel: pci 0000:00:1b.4: Adding to iommu group 9
May 17 01:43:44.027087 kernel: pci 0000:00:1b.5: Adding to iommu group 10
May 17 01:43:44.027134 kernel: pci 0000:00:1c.0: Adding to iommu group 11
May 17 01:43:44.027180 kernel: pci 0000:00:1c.3: Adding to iommu group 12
May 17 01:43:44.027227 kernel: pci 0000:00:1e.0: Adding to iommu group 13
May 17 01:43:44.027276 kernel: pci 0000:00:1f.0: Adding to iommu group 14
May 17 01:43:44.027361 kernel: pci 0000:00:1f.4: Adding to iommu group 14
May 17 01:43:44.027408 kernel: pci 0000:00:1f.5: Adding to iommu group 14
May 17 01:43:44.027456 kernel: pci 0000:01:00.0: Adding to iommu group 1
May 17 01:43:44.027504 kernel: pci 0000:01:00.1: Adding to iommu group 1
May 17 01:43:44.027551 kernel: pci 0000:03:00.0: Adding to iommu group 15
May 17 01:43:44.027600 kernel: pci 0000:04:00.0: Adding to iommu group 16
May 17 01:43:44.027648 kernel: pci 0000:06:00.0: Adding to iommu group 17
May 17 01:43:44.027699 kernel: pci 0000:07:00.0: Adding to iommu group 17
May 17 01:43:44.027708 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
May 17 01:43:44.027715 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 17 01:43:44.027721 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
May 17 01:43:44.027727 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
May 17 01:43:44.027732 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
May 17 01:43:44.027738 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
May 17 01:43:44.027744 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
May 17 01:43:44.027793 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
May 17 01:43:44.027803 kernel: Initialise system trusted keyrings
May 17 01:43:44.027809 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
May 17 01:43:44.027815 kernel: Key type asymmetric registered
May 17 01:43:44.027820 kernel: Asymmetric key parser 'x509' registered
May 17 01:43:44.027826 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 01:43:44.027831 kernel: io scheduler mq-deadline registered
May 17 01:43:44.027837 kernel: io scheduler kyber registered
May 17 01:43:44.027843 kernel: io scheduler bfq registered
May 17 01:43:44.027889 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
May 17 01:43:44.027938 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
May 17 01:43:44.027986 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
May 17 01:43:44.028033 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
May 17 01:43:44.028079 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
May 17 01:43:44.028127 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
May 17 01:43:44.028180 kernel: thermal LNXTHERM:00: registered as thermal_zone0
May 17 01:43:44.028189 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
May 17 01:43:44.028195 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
May 17 01:43:44.028202 kernel: pstore: Using crash dump compression: deflate
May 17 01:43:44.028208 kernel: pstore: Registered erst as persistent store backend
May 17 01:43:44.028213 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 01:43:44.028219 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 01:43:44.028225 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 01:43:44.028230 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 17 01:43:44.028236 kernel: hpet_acpi_add: no address or irqs in _CRS
May 17 01:43:44.028303 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
May 17 01:43:44.028327 kernel: i8042: PNP: No PS/2 controller found.
May 17 01:43:44.028369 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 May 17 01:43:44.028413 kernel: rtc_cmos rtc_cmos: registered as rtc0 May 17 01:43:44.028455 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-05-17T01:43:42 UTC (1747446222) May 17 01:43:44.028500 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram May 17 01:43:44.028508 kernel: intel_pstate: Intel P-state driver initializing May 17 01:43:44.028514 kernel: intel_pstate: Disabling energy efficiency optimization May 17 01:43:44.028520 kernel: intel_pstate: HWP enabled May 17 01:43:44.028527 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 May 17 01:43:44.028533 kernel: vesafb: scrolling: redraw May 17 01:43:44.028538 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 May 17 01:43:44.028544 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000064e17301, using 768k, total 768k May 17 01:43:44.028550 kernel: Console: switching to colour frame buffer device 128x48 May 17 01:43:44.028556 kernel: fb0: VESA VGA frame buffer device May 17 01:43:44.028561 kernel: NET: Registered PF_INET6 protocol family May 17 01:43:44.028567 kernel: Segment Routing with IPv6 May 17 01:43:44.028573 kernel: In-situ OAM (IOAM) with IPv6 May 17 01:43:44.028579 kernel: NET: Registered PF_PACKET protocol family May 17 01:43:44.028585 kernel: Key type dns_resolver registered May 17 01:43:44.028591 kernel: microcode: Current revision: 0x000000fc May 17 01:43:44.028596 kernel: microcode: Updated early from: 0x000000f4 May 17 01:43:44.028602 kernel: microcode: Microcode Update Driver: v2.2. May 17 01:43:44.028608 kernel: IPI shorthand broadcast: enabled May 17 01:43:44.028613 kernel: sched_clock: Marking stable (2489047632, 1378350655)->(4398188128, -530789841) May 17 01:43:44.028619 kernel: registered taskstats version 1 May 17 01:43:44.028625 kernel: Loading compiled-in X.509 certificates May 17 01:43:44.028631 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 01:43:44.028637 kernel: Key type .fscrypt registered May 17 01:43:44.028643 kernel: Key type fscrypt-provisioning registered May 17 01:43:44.028648 kernel: ima: Allocated hash algorithm: sha1 May 17 01:43:44.028654 kernel: ima: No architecture policies found May 17 01:43:44.028660 kernel: clk: Disabling unused clocks May 17 01:43:44.028665 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 01:43:44.028671 kernel: Write protecting the kernel read-only data: 36864k May 17 01:43:44.028677 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 01:43:44.028683 kernel: Run /init as init process May 17 01:43:44.028689 kernel: with arguments: May 17 01:43:44.028695 kernel: /init May 17 01:43:44.028700 kernel: with environment: May 17 01:43:44.028706 kernel: HOME=/ May 17 01:43:44.028711 kernel: TERM=linux May 17 01:43:44.028717 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 01:43:44.028724 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 01:43:44.028732 systemd[1]: Detected architecture x86-64. May 17 01:43:44.028738 systemd[1]: Running in initrd. May 17 01:43:44.028744 systemd[1]: No hostname configured, using default hostname. 
May 17 01:43:44.028750 systemd[1]: Hostname set to . May 17 01:43:44.028755 systemd[1]: Initializing machine ID from random generator. May 17 01:43:44.028762 systemd[1]: Queued start job for default target initrd.target. May 17 01:43:44.028767 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 01:43:44.028773 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 01:43:44.028781 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 01:43:44.028787 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 01:43:44.028793 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 01:43:44.028799 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 01:43:44.028805 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 01:43:44.028812 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz May 17 01:43:44.028818 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns May 17 01:43:44.028824 kernel: clocksource: Switched to clocksource tsc May 17 01:43:44.028830 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 01:43:44.028836 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 01:43:44.028842 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 01:43:44.028848 systemd[1]: Reached target paths.target - Path Units. May 17 01:43:44.028854 systemd[1]: Reached target slices.target - Slice Units. May 17 01:43:44.028860 systemd[1]: Reached target swap.target - Swaps. May 17 01:43:44.028866 systemd[1]: Reached target timers.target - Timer Units. May 17 01:43:44.028873 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 01:43:44.028879 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 01:43:44.028885 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 01:43:44.028891 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 01:43:44.028897 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 01:43:44.028903 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 01:43:44.028909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 01:43:44.028915 systemd[1]: Reached target sockets.target - Socket Units. May 17 01:43:44.028921 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 01:43:44.028928 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 01:43:44.028934 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 01:43:44.028940 systemd[1]: Starting systemd-fsck-usr.service... May 17 01:43:44.028945 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 01:43:44.028961 systemd-journald[264]: Collecting audit messages is disabled. May 17 01:43:44.028976 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
May 17 01:43:44.028983 systemd-journald[264]: Journal started May 17 01:43:44.028996 systemd-journald[264]: Runtime Journal (/run/log/journal/9802662725254ac6b084597311b63347) is 8.0M, max 639.9M, 631.9M free. May 17 01:43:44.052250 systemd-modules-load[265]: Inserted module 'overlay' May 17 01:43:44.074284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 01:43:44.095328 systemd[1]: Started systemd-journald.service - Journal Service. May 17 01:43:44.104693 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 01:43:44.104996 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 01:43:44.105099 systemd[1]: Finished systemd-fsck-usr.service. May 17 01:43:44.105931 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 01:43:44.106397 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 01:43:44.148380 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 01:43:44.166044 systemd-modules-load[265]: Inserted module 'br_netfilter' May 17 01:43:44.207634 kernel: Bridge firewalling registered May 17 01:43:44.166470 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 01:43:44.224835 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 01:43:44.245812 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 01:43:44.268019 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 01:43:44.307597 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 01:43:44.308103 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 01:43:44.308530 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 01:43:44.313532 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 01:43:44.313674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 01:43:44.314869 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 01:43:44.321517 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 01:43:44.322176 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 01:43:44.337453 systemd-resolved[299]: Positive Trust Anchors: May 17 01:43:44.337459 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 01:43:44.337495 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 01:43:44.339772 systemd-resolved[299]: Defaulting to hostname 'linux'. May 17 01:43:44.342626 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
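The bridge warning above ("filtering via arp/ip/ip6tables is no longer available by default") is advisory: systemd-modules-load inserted br_netfilter here, but on systems where it is not loaded, iptables stops seeing bridged traffic. A minimal persistent fix, assuming the stock modules-load.d and sysctl.d locations:

  # /etc/modules-load.d/br_netfilter.conf -- load the module at every boot
  br_netfilter

  # /etc/sysctl.d/99-bridge-nf.conf -- pass bridged frames through iptables/ip6tables
  net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1

  # apply immediately without a reboot
  modprobe br_netfilter && sysctl --system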
May 17 01:43:44.371658 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 01:43:44.479500 dracut-cmdline[305]: dracut-dracut-053 May 17 01:43:44.479500 dracut-cmdline[305]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 01:43:44.631320 kernel: SCSI subsystem initialized May 17 01:43:44.653311 kernel: Loading iSCSI transport class v2.0-870. May 17 01:43:44.676336 kernel: iscsi: registered transport (tcp) May 17 01:43:44.707679 kernel: iscsi: registered transport (qla4xxx) May 17 01:43:44.707695 kernel: QLogic iSCSI HBA Driver May 17 01:43:44.740577 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 01:43:44.755531 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 01:43:44.838705 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 01:43:44.838734 kernel: device-mapper: uevent: version 1.0.3 May 17 01:43:44.858331 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 01:43:44.917347 kernel: raid6: avx2x4 gen() 52378 MB/s May 17 01:43:44.949304 kernel: raid6: avx2x2 gen() 54011 MB/s May 17 01:43:44.985661 kernel: raid6: avx2x1 gen() 45282 MB/s May 17 01:43:44.985678 kernel: raid6: using algorithm avx2x2 gen() 54011 MB/s May 17 01:43:45.032698 kernel: raid6: .... xor() 31476 MB/s, rmw enabled May 17 01:43:45.032715 kernel: raid6: using avx2x2 recovery algorithm May 17 01:43:45.073330 kernel: xor: automatically using best checksumming function avx May 17 01:43:45.187310 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 01:43:45.193149 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 01:43:45.218620 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 01:43:45.225435 systemd-udevd[490]: Using default interface naming scheme 'v255'. May 17 01:43:45.227868 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 01:43:45.266523 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 01:43:45.311800 dracut-pre-trigger[502]: rd.md=0: removing MD RAID activation May 17 01:43:45.331922 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 01:43:45.348560 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 01:43:45.462402 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 01:43:45.492430 kernel: cryptd: max_cpu_qlen set to 1000 May 17 01:43:45.492446 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 01:43:45.493230 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 01:43:45.522190 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 01:43:45.525480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 01:43:45.630292 kernel: PTP clock support registered May 17 01:43:45.630324 kernel: libata version 3.00 loaded. 
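The dracut-cmdline hook above echoed the parameters it acts on; note that dracut prepends rd.driver.pre=btrfs and then repeats the verity/root arguments supplied by the bootloader. Both views can be inspected from a running system with standard tools:

  # the exact string the kernel logged as 'Command line:'
  cat /proc/cmdline
  # the parameters dracut itself would propose for the current disk layout
  dracut --print-cmdline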
May 17 01:43:45.630345 kernel: AVX2 version of gcm_enc/dec engaged. May 17 01:43:45.630357 kernel: ACPI: bus type USB registered May 17 01:43:45.630368 kernel: usbcore: registered new interface driver usbfs May 17 01:43:45.630379 kernel: usbcore: registered new interface driver hub May 17 01:43:45.630389 kernel: usbcore: registered new device driver usb May 17 01:43:45.630410 kernel: AES CTR mode by8 optimization enabled May 17 01:43:45.525598 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 01:43:45.636279 kernel: ahci 0000:00:17.0: version 3.0 May 17 01:43:45.650322 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 17 01:43:45.650350 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode May 17 01:43:45.650453 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. May 17 01:43:45.664379 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 01:43:46.288391 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst May 17 01:43:46.288489 kernel: igb 0000:03:00.0: added PHC on eth0 May 17 01:43:46.288567 kernel: scsi host0: ahci May 17 01:43:46.288631 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 01:43:46.288698 kernel: scsi host1: ahci May 17 01:43:46.288761 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 01:43:46.288822 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 May 17 01:43:46.288882 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 May 17 01:43:46.288940 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 17 01:43:46.288998 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 May 17 01:43:46.289056 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed May 17 01:43:46.289115 kernel: hub 1-0:1.0: USB hub found May 17 01:43:46.289188 kernel: hub 1-0:1.0: 16 ports detected May 17 01:43:46.289253 kernel: hub 2-0:1.0: USB hub found May 17 01:43:46.289326 kernel: hub 2-0:1.0: 10 ports detected May 17 01:43:46.289390 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:31:f8 May 17 01:43:46.289452 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 May 17 01:43:46.289514 kernel: scsi host2: ahci May 17 01:43:46.289577 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) May 17 01:43:46.289639 kernel: scsi host3: ahci May 17 01:43:46.289697 kernel: igb 0000:04:00.0: added PHC on eth1 May 17 01:43:46.289761 kernel: scsi host4: ahci May 17 01:43:46.289820 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 01:43:46.289882 kernel: scsi host5: ahci May 17 01:43:46.289939 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:31:f9 May 17 01:43:46.290002 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 May 17 01:43:46.290063 kernel: scsi host6: ahci May 17 01:43:46.290120 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) May 17 01:43:46.290181 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd May 17 01:43:46.290282 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 May 17 01:43:46.290290 kernel: hub 1-14:1.0: USB hub found May 17 01:43:46.290363 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 May 17 01:43:46.290373 kernel: hub 1-14:1.0: 4 ports detected May 17 01:43:46.290438 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 May 17 01:43:46.290447 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 May 17 01:43:46.290454 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 May 17 01:43:46.290460 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 May 17 01:43:46.290467 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 May 17 01:43:46.290474 kernel: mlx5_core 0000:01:00.0: firmware version: 14.29.2002 May 17 01:43:46.290540 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 01:43:46.270171 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 01:43:46.270361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 01:43:46.325355 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 01:43:46.353469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 01:43:46.363640 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 01:43:46.374981 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 01:43:46.375005 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 01:43:46.375028 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 01:43:46.375473 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 01:43:46.420328 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd May 17 01:43:46.463688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 01:43:46.474488 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 01:43:46.525278 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 01:43:46.525302 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 17 01:43:46.540276 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) May 17 01:43:46.540381 kernel: ata7: SATA link down (SStatus 0 SControl 300) May 17 01:43:46.561352 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged May 17 01:43:46.561537 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 01:43:46.561547 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 01:43:46.600372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 17 01:43:46.721390 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 01:43:46.721401 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 17 01:43:46.721412 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 01:43:46.721420 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 May 17 01:43:46.721427 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 May 17 01:43:46.721434 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 01:43:46.702620 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 01:43:46.755315 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 17 01:43:46.755327 kernel: ata1.00: Features: NCQ-prio May 17 01:43:46.772439 kernel: ata2.00: Features: NCQ-prio May 17 01:43:46.785277 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 17 01:43:46.785366 kernel: ata1.00: configured for UDMA/133 May 17 01:43:46.789314 kernel: mlx5_core 0000:01:00.1: firmware version: 14.29.2002 May 17 01:43:46.789402 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 May 17 01:43:46.790305 kernel: ata2.00: configured for UDMA/133 May 17 01:43:46.803146 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 17 01:43:46.808327 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 May 17 01:43:46.908278 kernel: igb 0000:03:00.0 eno1: renamed from eth0 May 17 01:43:46.929543 kernel: usbcore: registered new interface driver usbhid May 17 01:43:46.929564 kernel: usbhid: USB HID core driver May 17 01:43:46.957277 kernel: igb 0000:04:00.0 eno2: renamed from eth1 May 17 01:43:46.957420 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 May 17 01:43:47.009520 kernel: ata2.00: Enabling discard_zeroes_data May 17 01:43:47.009546 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:43:47.009561 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 01:43:47.029190 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 17 01:43:47.029311 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks May 17 01:43:47.029409 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks May 17 01:43:47.035275 kernel: sd 1:0:0:0: [sda] Write Protect is off May 17 01:43:47.040276 kernel: sd 0:0:0:0: [sdb] Write Protect is off May 17 01:43:47.042322 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 May 17 01:43:47.042444 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 May 17 01:43:47.042462 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 May 17 01:43:47.049257 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 May 17 01:43:47.049369 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 01:43:47.049463 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 May 17 01:43:47.061313 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes May 17 01:43:47.070278 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) May 17 01:43:47.073276 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, 
read cache: enabled, doesn't support DPO or FUA May 17 01:43:47.076311 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged May 17 01:43:47.084280 kernel: ata2.00: Enabling discard_zeroes_data May 17 01:43:47.284584 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 17 01:43:47.284674 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes May 17 01:43:47.285343 kernel: sd 1:0:0:0: [sda] Attached SCSI disk May 17 01:43:47.387343 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:43:47.429591 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 01:43:47.429611 kernel: GPT:9289727 != 937703087 May 17 01:43:47.447234 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 01:43:47.462578 kernel: GPT:9289727 != 937703087 May 17 01:43:47.479392 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 01:43:47.496028 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 May 17 01:43:47.508323 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk May 17 01:43:47.546319 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 May 17 01:43:47.555491 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. May 17 01:43:47.574369 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (561) May 17 01:43:47.587328 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sdb3 scanned by (udev-worker) (543) May 17 01:43:47.587379 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 May 17 01:43:47.618965 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. May 17 01:43:47.637562 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. May 17 01:43:47.677997 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. May 17 01:43:47.689432 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. May 17 01:43:47.728508 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 01:43:47.767383 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:43:47.767400 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 May 17 01:43:47.767450 disk-uuid[721]: Primary Header is updated. May 17 01:43:47.767450 disk-uuid[721]: Secondary Entries is updated. May 17 01:43:47.767450 disk-uuid[721]: Secondary Header is updated. May 17 01:43:47.821354 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:43:47.821367 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 May 17 01:43:47.821375 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:43:47.848278 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 May 17 01:43:48.828447 kernel: ata1.00: Enabling discard_zeroes_data May 17 01:43:48.848252 disk-uuid[722]: The operation has completed successfully. May 17 01:43:48.856542 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 May 17 01:43:48.882139 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 01:43:48.882204 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 01:43:48.915519 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
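The GPT complaints above (9289727 != 937703087, "Alternate GPT header not at the end of the disk") are the usual signature of a small disk image written onto a larger disk; disk-uuid.service repaired the table in place, which is why the "Secondary Header is updated" lines follow and sdb is rescanned several times. The manual equivalent is a short sketch; parted is what the kernel hint names, and sgdisk is a scriptable alternative:

  # interactive: parted spots the misplaced backup header and offers to Fix it
  parted /dev/sdb print
  # scriptable: move the backup GPT structures to the true end of the disk
  sgdisk -e /dev/sdb
  partprobe /dev/sdb   # ask the kernel to re-read the corrected table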
May 17 01:43:48.952400 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 01:43:48.952466 sh[740]: Success May 17 01:43:48.982692 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 01:43:49.007318 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 01:43:49.015574 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 01:43:49.085718 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 01:43:49.085746 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 01:43:49.106859 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 01:43:49.125872 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 01:43:49.144026 kernel: BTRFS info (device dm-0): using free space tree May 17 01:43:49.182309 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 01:43:49.185042 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 01:43:49.193797 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 01:43:49.201524 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 01:43:49.309600 kernel: BTRFS info (device sdb6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 01:43:49.309684 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm May 17 01:43:49.309692 kernel: BTRFS info (device sdb6): using free space tree May 17 01:43:49.309703 kernel: BTRFS info (device sdb6): enabling ssd optimizations May 17 01:43:49.309710 kernel: BTRFS info (device sdb6): auto enabling async discard May 17 01:43:49.242215 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 01:43:49.345400 kernel: BTRFS info (device sdb6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 01:43:49.344592 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 01:43:49.355931 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 01:43:49.396567 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 01:43:49.417394 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 01:43:49.431296 systemd-networkd[924]: lo: Link UP May 17 01:43:49.431299 systemd-networkd[924]: lo: Gained carrier May 17 01:43:49.433716 systemd-networkd[924]: Enumeration completed May 17 01:43:49.449806 ignition[832]: Ignition 2.19.0 May 17 01:43:49.434496 systemd-networkd[924]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 01:43:49.449810 ignition[832]: Stage: fetch-offline May 17 01:43:49.439443 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 01:43:49.449833 ignition[832]: no configs at "/usr/lib/ignition/base.d" May 17 01:43:49.446541 systemd[1]: Reached target network.target - Network. 
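systemd-networkd matched each interface against /usr/lib/systemd/network/zz-default.network, Flatcar's catch-all DHCP policy, which is why every link above is "Configuring with" the same unit. A minimal sketch of a per-NIC override with the same effect (file name and contents are illustrative, not taken from this system):

  # /etc/systemd/network/10-eno1.network
  [Match]
  Name=eno1

  [Network]
  DHCP=yes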
May 17 01:43:49.449839 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:43:49.451853 unknown[832]: fetched base config from "system" May 17 01:43:49.449892 ignition[832]: parsed url from cmdline: "" May 17 01:43:49.451858 unknown[832]: fetched user config from "system" May 17 01:43:49.449893 ignition[832]: no config URL provided May 17 01:43:49.461132 systemd-networkd[924]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 01:43:49.449896 ignition[832]: reading system config file "/usr/lib/ignition/user.ign" May 17 01:43:49.475625 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 01:43:49.449919 ignition[832]: parsing config with SHA512: bafd7f1f57a5bc102d1027f29361cb8eb1f72c89622b4165d721234f81f1e0c80a025fba630786861350a34c23a6a2b3b90730753d9c8e3ce05d533f4febe512 May 17 01:43:49.489974 systemd-networkd[924]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 01:43:49.452071 ignition[832]: fetch-offline: fetch-offline passed May 17 01:43:49.502869 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 01:43:49.452073 ignition[832]: POST message to Packet Timeline May 17 01:43:49.515593 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 01:43:49.452076 ignition[832]: POST Status error: resource requires networking May 17 01:43:49.684401 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 17 01:43:49.675555 systemd-networkd[924]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 01:43:49.452111 ignition[832]: Ignition finished successfully May 17 01:43:49.526985 ignition[936]: Ignition 2.19.0 May 17 01:43:49.526988 ignition[936]: Stage: kargs May 17 01:43:49.527088 ignition[936]: no configs at "/usr/lib/ignition/base.d" May 17 01:43:49.527095 ignition[936]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:43:49.527587 ignition[936]: kargs: kargs passed May 17 01:43:49.527590 ignition[936]: POST message to Packet Timeline May 17 01:43:49.527599 ignition[936]: GET https://metadata.packet.net/metadata: attempt #1 May 17 01:43:49.527967 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52923->[::1]:53: read: connection refused May 17 01:43:49.728801 ignition[936]: GET https://metadata.packet.net/metadata: attempt #2 May 17 01:43:49.729830 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35098->[::1]:53: read: connection refused May 17 01:43:49.899366 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 01:43:49.900665 systemd-networkd[924]: eno1: Link UP May 17 01:43:49.900887 systemd-networkd[924]: eno2: Link UP May 17 01:43:49.901088 systemd-networkd[924]: enp1s0f0np0: Link UP May 17 01:43:49.901338 systemd-networkd[924]: enp1s0f0np0: Gained carrier May 17 01:43:49.915547 systemd-networkd[924]: enp1s0f1np1: Link UP May 17 01:43:49.957527 systemd-networkd[924]: enp1s0f0np0: DHCPv4 address 145.40.90.165/31, gateway 145.40.90.164 acquired from 145.40.83.140 May 17 01:43:50.130403 ignition[936]: GET https://metadata.packet.net/metadata: attempt #3 May 17 01:43:50.131381 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60695->[::1]:53: read: 
connection refused May 17 01:43:50.688041 systemd-networkd[924]: enp1s0f1np1: Gained carrier May 17 01:43:50.931877 ignition[936]: GET https://metadata.packet.net/metadata: attempt #4 May 17 01:43:50.932857 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54254->[::1]:53: read: connection refused May 17 01:43:51.647876 systemd-networkd[924]: enp1s0f0np0: Gained IPv6LL May 17 01:43:52.415881 systemd-networkd[924]: enp1s0f1np1: Gained IPv6LL May 17 01:43:52.534554 ignition[936]: GET https://metadata.packet.net/metadata: attempt #5 May 17 01:43:52.535657 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60888->[::1]:53: read: connection refused May 17 01:43:55.739115 ignition[936]: GET https://metadata.packet.net/metadata: attempt #6 May 17 01:43:56.740773 ignition[936]: GET result: OK May 17 01:43:57.089823 ignition[936]: Ignition finished successfully May 17 01:43:57.094842 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 01:43:57.125529 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 01:43:57.131938 ignition[955]: Ignition 2.19.0 May 17 01:43:57.131943 ignition[955]: Stage: disks May 17 01:43:57.132059 ignition[955]: no configs at "/usr/lib/ignition/base.d" May 17 01:43:57.132066 ignition[955]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:43:57.132632 ignition[955]: disks: disks passed May 17 01:43:57.132635 ignition[955]: POST message to Packet Timeline May 17 01:43:57.132644 ignition[955]: GET https://metadata.packet.net/metadata: attempt #1 May 17 01:43:58.231026 ignition[955]: GET result: OK May 17 01:43:58.753819 ignition[955]: Ignition finished successfully May 17 01:43:58.757537 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 01:43:58.772730 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 01:43:58.790523 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 01:43:58.812509 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 01:43:58.833662 systemd[1]: Reached target sysinit.target - System Initialization. May 17 01:43:58.854565 systemd[1]: Reached target basic.target - Basic System. May 17 01:43:58.887551 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 01:43:58.919118 systemd-fsck[974]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 01:43:58.930847 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 01:43:58.952449 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 01:43:59.047276 kernel: EXT4-fs (sdb9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 01:43:59.047529 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 01:43:59.057759 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 01:43:59.074577 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 01:43:59.100249 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
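In the fsck summary above, "14/553520 files, 52654/553472 blocks" reads as inodes in use over inodes available and blocks in use over blocks available. systemd-fsck delegates to the filesystem's native checker (e2fsck for the ext4 ROOT volume mounted next); a read-only recheck is a safe sketch:

  # -n answers 'no' to every prompt: check only, modify nothing
  e2fsck -n /dev/disk/by-label/ROOT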
May 17 01:43:59.148334 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (984) May 17 01:43:59.148349 kernel: BTRFS info (device sdb6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 01:43:59.116700 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 17 01:43:59.220406 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm May 17 01:43:59.220418 kernel: BTRFS info (device sdb6): using free space tree May 17 01:43:59.220426 kernel: BTRFS info (device sdb6): enabling ssd optimizations May 17 01:43:59.220433 kernel: BTRFS info (device sdb6): auto enabling async discard May 17 01:43:59.226393 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... May 17 01:43:59.240643 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 01:43:59.240667 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 01:43:59.319496 coreos-metadata[986]: May 17 01:43:59.294 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:43:59.264223 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 01:43:59.351391 coreos-metadata[1002]: May 17 01:43:59.294 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:43:59.282584 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 01:43:59.324518 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 01:43:59.382399 initrd-setup-root[1016]: cut: /sysroot/etc/passwd: No such file or directory May 17 01:43:59.392384 initrd-setup-root[1023]: cut: /sysroot/etc/group: No such file or directory May 17 01:43:59.402366 initrd-setup-root[1030]: cut: /sysroot/etc/shadow: No such file or directory May 17 01:43:59.412535 initrd-setup-root[1037]: cut: /sysroot/etc/gshadow: No such file or directory May 17 01:43:59.406922 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 01:43:59.427578 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 01:43:59.449848 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 01:43:59.489324 kernel: BTRFS info (device sdb6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 01:43:59.480916 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 01:43:59.490383 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 01:43:59.516544 ignition[1104]: INFO : Ignition 2.19.0 May 17 01:43:59.516544 ignition[1104]: INFO : Stage: mount May 17 01:43:59.516544 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 01:43:59.516544 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:43:59.516544 ignition[1104]: INFO : mount: mount passed May 17 01:43:59.516544 ignition[1104]: INFO : POST message to Packet Timeline May 17 01:43:59.516544 ignition[1104]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 01:44:00.308482 coreos-metadata[1002]: May 17 01:44:00.308 INFO Fetch successful May 17 01:44:00.377279 ignition[1104]: INFO : GET result: OK May 17 01:44:00.389112 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 17 01:44:00.389182 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. 
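The agents above (flatcar-metadata-hostname and flatcar-static-network, plus Ignition's POSTs) all talk to the Equinix Metal endpoint named in the log. From an instance the raw document can be fetched directly; piping through jq is only an illustrative convenience:

  curl -s https://metadata.packet.net/metadata | jq .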
May 17 01:44:00.430039 coreos-metadata[986]: May 17 01:44:00.429 INFO Fetch successful May 17 01:44:00.461547 coreos-metadata[986]: May 17 01:44:00.461 INFO wrote hostname ci-4081.3.3-n-d569167b40 to /sysroot/etc/hostname May 17 01:44:00.462847 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 01:44:00.770422 ignition[1104]: INFO : Ignition finished successfully May 17 01:44:00.773100 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 01:44:00.803496 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 01:44:00.814587 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 01:44:00.873897 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1127) May 17 01:44:00.873915 kernel: BTRFS info (device sdb6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 01:44:00.892757 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm May 17 01:44:00.909493 kernel: BTRFS info (device sdb6): using free space tree May 17 01:44:00.946134 kernel: BTRFS info (device sdb6): enabling ssd optimizations May 17 01:44:00.946150 kernel: BTRFS info (device sdb6): auto enabling async discard May 17 01:44:00.958559 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 01:44:00.986833 ignition[1144]: INFO : Ignition 2.19.0 May 17 01:44:00.986833 ignition[1144]: INFO : Stage: files May 17 01:44:01.001458 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 01:44:01.001458 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:44:01.001458 ignition[1144]: DEBUG : files: compiled without relabeling support, skipping May 17 01:44:01.001458 ignition[1144]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 01:44:01.001458 ignition[1144]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 01:44:01.001458 ignition[1144]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 01:44:01.001458 ignition[1144]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 01:44:01.001458 ignition[1144]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 01:44:01.001458 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 17 01:44:01.001458 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 17 01:44:00.991230 unknown[1144]: wrote ssh authorized keys file for user: core May 17 01:44:01.132516 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 01:44:01.166166 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 17 01:44:01.166166 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 17 01:44:01.946059 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 01:44:02.409168 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 01:44:02.409168 ignition[1144]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 01:44:02.439510 ignition[1144]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 01:44:02.439510 ignition[1144]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 01:44:02.439510 ignition[1144]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 01:44:02.439510 ignition[1144]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 17 01:44:02.439510 ignition[1144]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 17 01:44:02.439510 ignition[1144]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 01:44:02.439510 ignition[1144]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 01:44:02.439510 ignition[1144]: INFO : files: files passed May 17 01:44:02.439510 ignition[1144]: INFO : POST message to Packet Timeline May 17 01:44:02.439510 ignition[1144]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 01:44:03.412995 ignition[1144]: INFO : GET result: OK May 17 01:44:03.814691 
ignition[1144]: INFO : Ignition finished successfully May 17 01:44:03.817112 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 01:44:03.848591 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 01:44:03.859868 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 01:44:03.880609 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 01:44:03.880682 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 01:44:03.933567 initrd-setup-root-after-ignition[1181]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 01:44:03.933567 initrd-setup-root-after-ignition[1181]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 01:44:03.903809 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 01:44:03.995498 initrd-setup-root-after-ignition[1185]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 01:44:03.925584 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 01:44:03.958517 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 01:44:04.033397 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 01:44:04.033682 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 01:44:04.054934 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 01:44:04.074592 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 01:44:04.094753 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 01:44:04.108668 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 01:44:04.180094 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 01:44:04.211925 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 01:44:04.226797 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 01:44:04.246572 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 01:44:04.267660 systemd[1]: Stopped target timers.target - Timer Units. May 17 01:44:04.286969 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 01:44:04.287357 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 01:44:04.314937 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 01:44:04.335660 systemd[1]: Stopped target basic.target - Basic System. May 17 01:44:04.353869 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 01:44:04.372925 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 01:44:04.393991 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 01:44:04.415077 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 01:44:04.435854 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 01:44:04.456899 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 01:44:04.478017 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 01:44:04.497883 systemd[1]: Stopped target swap.target - Swaps. 
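For reference, the files-stage ops above ("createFiles: op(n)") come from the system-provided config that fetch-offline read from /usr/lib/ignition/user.ign. A minimal hand-written Ignition spec-3 config producing one such op would look like this (version and payload are illustrative; Ignition 2.19.0 accepts spec 3.x):

  {
    "ignition": { "version": "3.4.0" },
    "storage": {
      "files": [
        { "path": "/etc/hostname",
          "mode": 420,
          "contents": { "source": "data:,my-host" } }
      ]
    }
  }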
May 17 01:44:04.516887 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 01:44:04.517318 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 01:44:04.542999 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 01:44:04.564019 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 01:44:04.584757 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 01:44:04.585145 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 01:44:04.608771 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 01:44:04.609171 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 01:44:04.641858 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 01:44:04.642331 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 01:44:04.663086 systemd[1]: Stopped target paths.target - Path Units. May 17 01:44:04.681746 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 01:44:04.682124 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 01:44:04.702895 systemd[1]: Stopped target slices.target - Slice Units. May 17 01:44:04.721997 systemd[1]: Stopped target sockets.target - Socket Units. May 17 01:44:04.740969 systemd[1]: iscsid.socket: Deactivated successfully. May 17 01:44:04.741293 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 01:44:04.760911 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 01:44:04.761213 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 01:44:04.783971 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 01:44:04.784396 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 01:44:04.911427 ignition[1206]: INFO : Ignition 2.19.0 May 17 01:44:04.911427 ignition[1206]: INFO : Stage: umount May 17 01:44:04.911427 ignition[1206]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 01:44:04.911427 ignition[1206]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:44:04.911427 ignition[1206]: INFO : umount: umount passed May 17 01:44:04.911427 ignition[1206]: INFO : POST message to Packet Timeline May 17 01:44:04.911427 ignition[1206]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 01:44:04.804968 systemd[1]: ignition-files.service: Deactivated successfully. May 17 01:44:04.805367 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 01:44:04.824968 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 01:44:04.825378 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 01:44:04.855548 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 01:44:04.862594 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 01:44:04.862707 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 01:44:04.903524 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 01:44:04.919501 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 01:44:04.919575 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 17 01:44:04.942613 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 01:44:04.942728 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 01:44:04.979848 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 01:44:04.981509 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 01:44:04.981746 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 01:44:05.001265 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 01:44:05.001542 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 01:44:05.864863 ignition[1206]: INFO : GET result: OK May 17 01:44:06.281481 ignition[1206]: INFO : Ignition finished successfully May 17 01:44:06.282949 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 01:44:06.283106 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 01:44:06.302668 systemd[1]: Stopped target network.target - Network. May 17 01:44:06.319571 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 01:44:06.319771 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 01:44:06.339659 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 01:44:06.339814 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 01:44:06.359687 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 01:44:06.359844 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 01:44:06.378665 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 01:44:06.378833 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 01:44:06.397668 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 01:44:06.397832 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 01:44:06.417061 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 01:44:06.427401 systemd-networkd[924]: enp1s0f0np0: DHCPv6 lease lost May 17 01:44:06.434496 systemd-networkd[924]: enp1s0f1np1: DHCPv6 lease lost May 17 01:44:06.434857 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 01:44:06.454331 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 01:44:06.454611 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 01:44:06.473700 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 01:44:06.474077 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 01:44:06.493898 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 01:44:06.494015 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 01:44:06.529466 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 01:44:06.549420 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 01:44:06.549462 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 01:44:06.567491 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 01:44:06.567555 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 01:44:06.585630 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 01:44:06.585783 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 01:44:06.605651 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
May 17 01:44:06.605816 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 01:44:06.626907 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 01:44:06.648541 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 01:44:06.648906 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 01:44:06.677951 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 01:44:06.677988 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 01:44:06.704410 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 01:44:06.704550 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 01:44:06.724601 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 01:44:06.724688 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 01:44:06.763477 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 01:44:06.763734 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 01:44:06.793467 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 01:44:06.793702 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 01:44:06.853588 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 01:44:06.865431 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 01:44:06.865608 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 01:44:06.876512 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 01:44:06.876537 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 01:44:07.093507 systemd-journald[264]: Received SIGTERM from PID 1 (systemd). May 17 01:44:06.905576 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 01:44:06.905619 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 01:44:06.925627 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 01:44:06.925676 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 01:44:06.952960 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 01:44:06.970475 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 01:44:07.025755 systemd[1]: Switching root. 
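"Switching root" is the initrd-to-real-root handoff: initrd-switch-root.service re-executes systemd with /sysroot as the new root, after which journald receives SIGTERM and stops, as the next line records. The underlying operation is the documented systemctl verb:

  # what initrd-switch-root.service performs at this point in boot
  systemctl switch-root /sysroot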
May 17 01:44:07.156443 systemd-journald[264]: Journal stopped
01:43:44.016011 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] May 17 01:43:44.016016 kernel: Movable zone start for each node May 17 01:43:44.016020 kernel: Early memory node ranges May 17 01:43:44.016025 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] May 17 01:43:44.016030 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] May 17 01:43:44.016035 kernel: node 0: [mem 0x0000000040400000-0x0000000081b1dfff] May 17 01:43:44.016041 kernel: node 0: [mem 0x0000000081b20000-0x000000008afccfff] May 17 01:43:44.016046 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] May 17 01:43:44.016051 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] May 17 01:43:44.016056 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] May 17 01:43:44.016065 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] May 17 01:43:44.016071 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 01:43:44.016076 kernel: On node 0, zone DMA: 103 pages in unavailable ranges May 17 01:43:44.016081 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges May 17 01:43:44.016087 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges May 17 01:43:44.016093 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges May 17 01:43:44.016098 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges May 17 01:43:44.016104 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges May 17 01:43:44.016109 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges May 17 01:43:44.016114 kernel: ACPI: PM-Timer IO Port: 0x1808 May 17 01:43:44.016120 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 17 01:43:44.016125 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 17 01:43:44.016130 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 17 01:43:44.016137 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 17 01:43:44.016142 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 17 01:43:44.016147 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 17 01:43:44.016153 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 17 01:43:44.016158 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 17 01:43:44.016163 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 17 01:43:44.016168 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 17 01:43:44.016174 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 17 01:43:44.016179 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 17 01:43:44.016185 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 17 01:43:44.016190 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 17 01:43:44.016196 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 17 01:43:44.016201 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 17 01:43:44.016206 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 May 17 01:43:44.016211 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 01:43:44.016217 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 01:43:44.016222 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 01:43:44.016228 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 01:43:44.016234 kernel: TSC deadline timer available May 17 01:43:44.016239 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs May 17 01:43:44.016244 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices May 17 01:43:44.016250 kernel: Booting paravirtualized kernel on bare hardware May 17 01:43:44.016255 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 01:43:44.016261 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 May 17 01:43:44.016266 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 May 17 01:43:44.016274 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 May 17 01:43:44.016280 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 May 17 01:43:44.016286 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 01:43:44.016313 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 01:43:44.016319 kernel: random: crng init done May 17 01:43:44.016324 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) May 17 01:43:44.016330 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) May 17 01:43:44.016335 kernel: Fallback order for Node 0: 0 May 17 01:43:44.016354 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 May 17 01:43:44.016359 kernel: Policy zone: Normal May 17 01:43:44.016366 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 01:43:44.016371 kernel: software IO TLB: area num 16. May 17 01:43:44.016377 kernel: Memory: 32720300K/33452980K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 732420K reserved, 0K cma-reserved) May 17 01:43:44.016382 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 May 17 01:43:44.016388 kernel: ftrace: allocating 37948 entries in 149 pages May 17 01:43:44.016393 kernel: ftrace: allocated 149 pages with 4 groups May 17 01:43:44.016398 kernel: Dynamic Preempt: voluntary May 17 01:43:44.016404 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 01:43:44.016409 kernel: rcu: RCU event tracing is enabled. May 17 01:43:44.016416 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. May 17 01:43:44.016421 kernel: Trampoline variant of Tasks RCU enabled. May 17 01:43:44.016427 kernel: Rude variant of Tasks RCU enabled. May 17 01:43:44.016432 kernel: Tracing variant of Tasks RCU enabled. May 17 01:43:44.016437 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 01:43:44.016443 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 May 17 01:43:44.016448 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 May 17 01:43:44.016453 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 01:43:44.016459 kernel: Console: colour dummy device 80x25 May 17 01:43:44.016464 kernel: printk: console [tty0] enabled May 17 01:43:44.016470 kernel: printk: console [ttyS1] enabled May 17 01:43:44.016476 kernel: ACPI: Core revision 20230628 May 17 01:43:44.016481 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
May 17 01:43:44.016487 kernel: APIC: Switch to symmetric I/O mode setup May 17 01:43:44.016492 kernel: DMAR: Host address width 39 May 17 01:43:44.016497 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 May 17 01:43:44.016503 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da May 17 01:43:44.016508 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff May 17 01:43:44.016513 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 May 17 01:43:44.016520 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 May 17 01:43:44.016525 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. May 17 01:43:44.016531 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode May 17 01:43:44.016536 kernel: x2apic enabled May 17 01:43:44.016542 kernel: APIC: Switched APIC routing to: cluster x2apic May 17 01:43:44.016547 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns May 17 01:43:44.016553 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) May 17 01:43:44.016558 kernel: CPU0: Thermal monitoring enabled (TM1) May 17 01:43:44.016563 kernel: process: using mwait in idle threads May 17 01:43:44.016570 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 17 01:43:44.016575 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 17 01:43:44.016580 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 01:43:44.016585 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 17 01:43:44.016591 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 17 01:43:44.016596 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 17 01:43:44.016601 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 17 01:43:44.016607 kernel: RETBleed: Mitigation: Enhanced IBRS May 17 01:43:44.016612 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 01:43:44.016617 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 17 01:43:44.016622 kernel: TAA: Mitigation: TSX disabled May 17 01:43:44.016629 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers May 17 01:43:44.016634 kernel: SRBDS: Mitigation: Microcode May 17 01:43:44.016639 kernel: GDS: Mitigation: Microcode May 17 01:43:44.016645 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 01:43:44.016650 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 01:43:44.016655 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 01:43:44.016661 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 17 01:43:44.016666 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 17 01:43:44.016671 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 01:43:44.016677 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 17 01:43:44.016682 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 17 01:43:44.016688 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
May 17 01:43:44.016694 kernel: Freeing SMP alternatives memory: 32K May 17 01:43:44.016699 kernel: pid_max: default: 32768 minimum: 301 May 17 01:43:44.016704 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 01:43:44.016709 kernel: landlock: Up and running. May 17 01:43:44.016715 kernel: SELinux: Initializing. May 17 01:43:44.016720 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 01:43:44.016725 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 01:43:44.016731 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 17 01:43:44.016736 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. May 17 01:43:44.016742 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. May 17 01:43:44.016748 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. May 17 01:43:44.016754 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. May 17 01:43:44.016759 kernel: ... version: 4 May 17 01:43:44.016764 kernel: ... bit width: 48 May 17 01:43:44.016770 kernel: ... generic registers: 4 May 17 01:43:44.016775 kernel: ... value mask: 0000ffffffffffff May 17 01:43:44.016780 kernel: ... max period: 00007fffffffffff May 17 01:43:44.016786 kernel: ... fixed-purpose events: 3 May 17 01:43:44.016791 kernel: ... event mask: 000000070000000f May 17 01:43:44.016797 kernel: signal: max sigframe size: 2032 May 17 01:43:44.016803 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 May 17 01:43:44.016808 kernel: rcu: Hierarchical SRCU implementation. May 17 01:43:44.016813 kernel: rcu: Max phase no-delay instances is 400. May 17 01:43:44.016819 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. May 17 01:43:44.016824 kernel: smp: Bringing up secondary CPUs ... May 17 01:43:44.016829 kernel: smpboot: x86: Booting SMP configuration: May 17 01:43:44.016835 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 May 17 01:43:44.016841 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
May 17 01:43:44.016847 kernel: smp: Brought up 1 node, 16 CPUs May 17 01:43:44.016852 kernel: smpboot: Max logical packages: 1 May 17 01:43:44.016858 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) May 17 01:43:44.016863 kernel: devtmpfs: initialized May 17 01:43:44.016868 kernel: x86/mm: Memory block size: 128MB May 17 01:43:44.016874 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b1e000-0x81b1efff] (4096 bytes) May 17 01:43:44.016879 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) May 17 01:43:44.016885 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 01:43:44.016891 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) May 17 01:43:44.016896 kernel: pinctrl core: initialized pinctrl subsystem May 17 01:43:44.016901 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 01:43:44.016907 kernel: audit: initializing netlink subsys (disabled) May 17 01:43:44.016912 kernel: audit: type=2000 audit(1747446218.039:1): state=initialized audit_enabled=0 res=1 May 17 01:43:44.016917 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 01:43:44.016923 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 01:43:44.016928 kernel: cpuidle: using governor menu May 17 01:43:44.016933 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 01:43:44.016955 kernel: dca service started, version 1.12.1 May 17 01:43:44.016961 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) May 17 01:43:44.016966 kernel: PCI: Using configuration type 1 for base access May 17 01:43:44.016985 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' May 17 01:43:44.016991 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 01:43:44.016996 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 01:43:44.017001 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 17 01:43:44.017007 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 01:43:44.017012 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 01:43:44.017018 kernel: ACPI: Added _OSI(Module Device) May 17 01:43:44.017024 kernel: ACPI: Added _OSI(Processor Device) May 17 01:43:44.017029 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 01:43:44.017034 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 01:43:44.017040 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded May 17 01:43:44.017045 kernel: ACPI: Dynamic OEM Table Load: May 17 01:43:44.017050 kernel: ACPI: SSDT 0xFFFF900E40E5A000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) May 17 01:43:44.017056 kernel: ACPI: Dynamic OEM Table Load: May 17 01:43:44.017061 kernel: ACPI: SSDT 0xFFFF900E41E2C800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) May 17 01:43:44.017067 kernel: ACPI: Dynamic OEM Table Load: May 17 01:43:44.017073 kernel: ACPI: SSDT 0xFFFF900E40E04A00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) May 17 01:43:44.017078 kernel: ACPI: Dynamic OEM Table Load: May 17 01:43:44.017083 kernel: ACPI: SSDT 0xFFFF900E41582000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) May 17 01:43:44.017089 kernel: ACPI: Dynamic OEM Table Load: May 17 01:43:44.017094 kernel: ACPI: SSDT 0xFFFF900E42448000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) May 17 01:43:44.017099 kernel: ACPI: Dynamic OEM Table Load: May 17 01:43:44.017105 kernel: ACPI: SSDT 0xFFFF900E42453400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) May 17 01:43:44.017110 kernel: ACPI: _OSC evaluated successfully for all CPUs May 17 01:43:44.017115 kernel: ACPI: Interpreter enabled May 17 01:43:44.017122 kernel: ACPI: PM: (supports S0 S5) May 17 01:43:44.017127 kernel: ACPI: Using IOAPIC for interrupt routing May 17 01:43:44.017132 kernel: HEST: Enabling Firmware First mode for corrected errors. May 17 01:43:44.017138 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. May 17 01:43:44.017143 kernel: HEST: Table parsing has been initialized. May 17 01:43:44.017148 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
May 17 01:43:44.017154 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 01:43:44.017159 kernel: PCI: Using E820 reservations for host bridge windows May 17 01:43:44.017164 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F May 17 01:43:44.017171 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource May 17 01:43:44.017176 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource May 17 01:43:44.017182 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource May 17 01:43:44.017187 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource May 17 01:43:44.017192 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource May 17 01:43:44.017198 kernel: ACPI: \_TZ_.FN00: New power resource May 17 01:43:44.017203 kernel: ACPI: \_TZ_.FN01: New power resource May 17 01:43:44.017209 kernel: ACPI: \_TZ_.FN02: New power resource May 17 01:43:44.017214 kernel: ACPI: \_TZ_.FN03: New power resource May 17 01:43:44.017220 kernel: ACPI: \_TZ_.FN04: New power resource May 17 01:43:44.017226 kernel: ACPI: \PIN_: New power resource May 17 01:43:44.017231 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) May 17 01:43:44.017340 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:43:44.017394 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] May 17 01:43:44.017441 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] May 17 01:43:44.017449 kernel: PCI host bridge to bus 0000:00 May 17 01:43:44.017499 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 01:43:44.017542 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 01:43:44.017584 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 01:43:44.017625 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] May 17 01:43:44.017666 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] May 17 01:43:44.017707 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] May 17 01:43:44.017766 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 May 17 01:43:44.017821 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 May 17 01:43:44.017870 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold May 17 01:43:44.017922 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 May 17 01:43:44.017969 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] May 17 01:43:44.018020 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 May 17 01:43:44.018066 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] May 17 01:43:44.018121 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 May 17 01:43:44.018167 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] May 17 01:43:44.018214 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold May 17 01:43:44.018264 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 May 17 01:43:44.018355 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] May 17 01:43:44.018401 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] May 17 01:43:44.018456 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 May 17 01:43:44.018504 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 01:43:44.018554 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 May 17 01:43:44.018601 kernel: pci 0000:00:15.1: 
reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 01:43:44.018650 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 May 17 01:43:44.018697 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] May 17 01:43:44.018744 kernel: pci 0000:00:16.0: PME# supported from D3hot May 17 01:43:44.018797 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 May 17 01:43:44.018843 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] May 17 01:43:44.018898 kernel: pci 0000:00:16.1: PME# supported from D3hot May 17 01:43:44.018948 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 May 17 01:43:44.018996 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] May 17 01:43:44.019042 kernel: pci 0000:00:16.4: PME# supported from D3hot May 17 01:43:44.019094 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 May 17 01:43:44.019141 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] May 17 01:43:44.019188 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] May 17 01:43:44.019234 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] May 17 01:43:44.019285 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] May 17 01:43:44.019332 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] May 17 01:43:44.019379 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] May 17 01:43:44.019428 kernel: pci 0000:00:17.0: PME# supported from D3hot May 17 01:43:44.019483 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 May 17 01:43:44.019534 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold May 17 01:43:44.019585 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 May 17 01:43:44.019636 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold May 17 01:43:44.019686 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 May 17 01:43:44.019734 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold May 17 01:43:44.019786 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 May 17 01:43:44.019834 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold May 17 01:43:44.019884 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 May 17 01:43:44.019934 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold May 17 01:43:44.019984 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 May 17 01:43:44.020032 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] May 17 01:43:44.020083 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 May 17 01:43:44.020135 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 May 17 01:43:44.020183 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] May 17 01:43:44.020231 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] May 17 01:43:44.020315 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 May 17 01:43:44.020365 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] May 17 01:43:44.020418 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 May 17 01:43:44.020466 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] May 17 01:43:44.020515 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] May 17 01:43:44.020565 kernel: pci 0000:01:00.0: PME# supported from D3cold May 17 01:43:44.020613 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] May 17 01:43:44.020661 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains 
BAR0 for 8 VFs) May 17 01:43:44.020715 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 May 17 01:43:44.020763 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] May 17 01:43:44.020812 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] May 17 01:43:44.020861 kernel: pci 0000:01:00.1: PME# supported from D3cold May 17 01:43:44.020910 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] May 17 01:43:44.020959 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) May 17 01:43:44.021007 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 01:43:44.021054 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] May 17 01:43:44.021101 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] May 17 01:43:44.021149 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] May 17 01:43:44.021201 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect May 17 01:43:44.021251 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 May 17 01:43:44.021307 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] May 17 01:43:44.021355 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] May 17 01:43:44.021404 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] May 17 01:43:44.021453 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 17 01:43:44.021501 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] May 17 01:43:44.021548 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] May 17 01:43:44.021596 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] May 17 01:43:44.021651 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect May 17 01:43:44.021700 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 May 17 01:43:44.021749 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] May 17 01:43:44.021797 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] May 17 01:43:44.021846 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] May 17 01:43:44.021894 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold May 17 01:43:44.021942 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] May 17 01:43:44.021989 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] May 17 01:43:44.022038 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] May 17 01:43:44.022085 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] May 17 01:43:44.022142 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 May 17 01:43:44.022190 kernel: pci 0000:06:00.0: enabling Extended Tags May 17 01:43:44.022239 kernel: pci 0000:06:00.0: supports D1 D2 May 17 01:43:44.022291 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 01:43:44.022339 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] May 17 01:43:44.022390 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 17 01:43:44.022436 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 17 01:43:44.022489 kernel: pci_bus 0000:07: extended config space not accessible May 17 01:43:44.022545 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 May 17 01:43:44.022597 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] May 17 01:43:44.022648 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] May 17 01:43:44.022697 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] May 17 01:43:44.022750 kernel: pci 0000:07:00.0: Video device 
with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 01:43:44.022799 kernel: pci 0000:07:00.0: supports D1 D2 May 17 01:43:44.022852 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 01:43:44.022901 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] May 17 01:43:44.022949 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 17 01:43:44.022998 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] May 17 01:43:44.023006 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 May 17 01:43:44.023012 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 May 17 01:43:44.023020 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 May 17 01:43:44.023025 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 May 17 01:43:44.023031 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 May 17 01:43:44.023037 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 May 17 01:43:44.023042 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 May 17 01:43:44.023048 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 May 17 01:43:44.023054 kernel: iommu: Default domain type: Translated May 17 01:43:44.023059 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 01:43:44.023065 kernel: PCI: Using ACPI for IRQ routing May 17 01:43:44.023072 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 01:43:44.023077 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] May 17 01:43:44.023083 kernel: e820: reserve RAM buffer [mem 0x81b1e000-0x83ffffff] May 17 01:43:44.023088 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] May 17 01:43:44.023094 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] May 17 01:43:44.023099 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] May 17 01:43:44.023105 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] May 17 01:43:44.023153 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device May 17 01:43:44.023203 kernel: pci 0000:07:00.0: vgaarb: bridge control possible May 17 01:43:44.023255 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 01:43:44.023263 kernel: vgaarb: loaded May 17 01:43:44.023269 kernel: clocksource: Switched to clocksource tsc-early May 17 01:43:44.023293 kernel: VFS: Disk quotas dquot_6.6.0 May 17 01:43:44.023299 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 01:43:44.023305 kernel: pnp: PnP ACPI init May 17 01:43:44.023355 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved May 17 01:43:44.023403 kernel: pnp 00:02: [dma 0 disabled] May 17 01:43:44.023457 kernel: pnp 00:03: [dma 0 disabled] May 17 01:43:44.023505 kernel: system 00:04: [io 0x0680-0x069f] has been reserved May 17 01:43:44.023551 kernel: system 00:04: [io 0x164e-0x164f] has been reserved May 17 01:43:44.023598 kernel: system 00:05: [io 0x1854-0x1857] has been reserved May 17 01:43:44.023645 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved May 17 01:43:44.023690 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved May 17 01:43:44.023735 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved May 17 01:43:44.023780 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved May 17 01:43:44.023825 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved May 17 01:43:44.023870 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved May 17 01:43:44.023913 kernel: 
system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved May 17 01:43:44.023958 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved May 17 01:43:44.024005 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved May 17 01:43:44.024052 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved May 17 01:43:44.024094 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved May 17 01:43:44.024138 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved May 17 01:43:44.024181 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved May 17 01:43:44.024225 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved May 17 01:43:44.024269 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved May 17 01:43:44.024320 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved May 17 01:43:44.024346 kernel: pnp: PnP ACPI: found 10 devices May 17 01:43:44.024352 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 01:43:44.024358 kernel: NET: Registered PF_INET protocol family May 17 01:43:44.024364 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 01:43:44.024370 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) May 17 01:43:44.024376 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 01:43:44.024382 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 01:43:44.024388 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 17 01:43:44.024395 kernel: TCP: Hash tables configured (established 262144 bind 65536) May 17 01:43:44.024401 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) May 17 01:43:44.024406 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) May 17 01:43:44.024412 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 01:43:44.024418 kernel: NET: Registered PF_XDP protocol family May 17 01:43:44.024466 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] May 17 01:43:44.024512 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] May 17 01:43:44.024560 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] May 17 01:43:44.024608 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] May 17 01:43:44.024660 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 17 01:43:44.024708 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] May 17 01:43:44.024757 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 17 01:43:44.024804 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 01:43:44.024851 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] May 17 01:43:44.024898 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] May 17 01:43:44.024945 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] May 17 01:43:44.024994 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] May 17 01:43:44.025040 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] May 17 01:43:44.025089 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] May 17 01:43:44.025136 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] May 17 01:43:44.025183 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] May 17 
01:43:44.025232 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] May 17 01:43:44.025283 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] May 17 01:43:44.025366 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] May 17 01:43:44.025415 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 17 01:43:44.025462 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] May 17 01:43:44.025511 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] May 17 01:43:44.025559 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 17 01:43:44.025606 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 17 01:43:44.025650 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc May 17 01:43:44.025693 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 01:43:44.025735 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 01:43:44.025776 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 01:43:44.025818 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] May 17 01:43:44.025858 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] May 17 01:43:44.025909 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] May 17 01:43:44.025952 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] May 17 01:43:44.026001 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] May 17 01:43:44.026044 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] May 17 01:43:44.026091 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] May 17 01:43:44.026134 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] May 17 01:43:44.026183 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] May 17 01:43:44.026226 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] May 17 01:43:44.026277 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] May 17 01:43:44.026358 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] May 17 01:43:44.026366 kernel: PCI: CLS 64 bytes, default 64 May 17 01:43:44.026372 kernel: DMAR: No ATSR found May 17 01:43:44.026378 kernel: DMAR: No SATC found May 17 01:43:44.026384 kernel: DMAR: dmar0: Using Queued invalidation May 17 01:43:44.026431 kernel: pci 0000:00:00.0: Adding to iommu group 0 May 17 01:43:44.026478 kernel: pci 0000:00:01.0: Adding to iommu group 1 May 17 01:43:44.026525 kernel: pci 0000:00:08.0: Adding to iommu group 2 May 17 01:43:44.026575 kernel: pci 0000:00:12.0: Adding to iommu group 3 May 17 01:43:44.026621 kernel: pci 0000:00:14.0: Adding to iommu group 4 May 17 01:43:44.026668 kernel: pci 0000:00:14.2: Adding to iommu group 4 May 17 01:43:44.026713 kernel: pci 0000:00:15.0: Adding to iommu group 5 May 17 01:43:44.026760 kernel: pci 0000:00:15.1: Adding to iommu group 5 May 17 01:43:44.026805 kernel: pci 0000:00:16.0: Adding to iommu group 6 May 17 01:43:44.026852 kernel: pci 0000:00:16.1: Adding to iommu group 6 May 17 01:43:44.026898 kernel: pci 0000:00:16.4: Adding to iommu group 6 May 17 01:43:44.026947 kernel: pci 0000:00:17.0: Adding to iommu group 7 May 17 01:43:44.026993 kernel: pci 0000:00:1b.0: Adding to iommu group 8 May 17 01:43:44.027040 kernel: pci 0000:00:1b.4: Adding to iommu group 9 May 17 01:43:44.027087 kernel: pci 0000:00:1b.5: Adding to iommu group 10 May 17 01:43:44.027134 kernel: pci 0000:00:1c.0: Adding to iommu group 11 May 17 01:43:44.027180 kernel: pci 0000:00:1c.3: Adding to iommu group 12 May 17 
01:43:44.027227 kernel: pci 0000:00:1e.0: Adding to iommu group 13 May 17 01:43:44.027276 kernel: pci 0000:00:1f.0: Adding to iommu group 14 May 17 01:43:44.027361 kernel: pci 0000:00:1f.4: Adding to iommu group 14 May 17 01:43:44.027408 kernel: pci 0000:00:1f.5: Adding to iommu group 14 May 17 01:43:44.027456 kernel: pci 0000:01:00.0: Adding to iommu group 1 May 17 01:43:44.027504 kernel: pci 0000:01:00.1: Adding to iommu group 1 May 17 01:43:44.027551 kernel: pci 0000:03:00.0: Adding to iommu group 15 May 17 01:43:44.027600 kernel: pci 0000:04:00.0: Adding to iommu group 16 May 17 01:43:44.027648 kernel: pci 0000:06:00.0: Adding to iommu group 17 May 17 01:43:44.027699 kernel: pci 0000:07:00.0: Adding to iommu group 17 May 17 01:43:44.027708 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O May 17 01:43:44.027715 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 17 01:43:44.027721 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) May 17 01:43:44.027727 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer May 17 01:43:44.027732 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules May 17 01:43:44.027738 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules May 17 01:43:44.027744 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules May 17 01:43:44.027793 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) May 17 01:43:44.027803 kernel: Initialise system trusted keyrings May 17 01:43:44.027809 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 May 17 01:43:44.027815 kernel: Key type asymmetric registered May 17 01:43:44.027820 kernel: Asymmetric key parser 'x509' registered May 17 01:43:44.027826 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 17 01:43:44.027831 kernel: io scheduler mq-deadline registered May 17 01:43:44.027837 kernel: io scheduler kyber registered May 17 01:43:44.027843 kernel: io scheduler bfq registered May 17 01:43:44.027889 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 May 17 01:43:44.027938 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 May 17 01:43:44.027986 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 May 17 01:43:44.028033 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 May 17 01:43:44.028079 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 May 17 01:43:44.028127 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 May 17 01:43:44.028180 kernel: thermal LNXTHERM:00: registered as thermal_zone0 May 17 01:43:44.028189 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) May 17 01:43:44.028195 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. May 17 01:43:44.028202 kernel: pstore: Using crash dump compression: deflate May 17 01:43:44.028208 kernel: pstore: Registered erst as persistent store backend May 17 01:43:44.028213 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 01:43:44.028219 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 01:43:44.028225 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 01:43:44.028230 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A May 17 01:43:44.028236 kernel: hpet_acpi_add: no address or irqs in _CRS May 17 01:43:44.028303 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) May 17 01:43:44.028327 kernel: i8042: PNP: No PS/2 controller found. 
May 17 01:43:44.028369 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 May 17 01:43:44.028413 kernel: rtc_cmos rtc_cmos: registered as rtc0 May 17 01:43:44.028455 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-05-17T01:43:42 UTC (1747446222) May 17 01:43:44.028500 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram May 17 01:43:44.028508 kernel: intel_pstate: Intel P-state driver initializing May 17 01:43:44.028514 kernel: intel_pstate: Disabling energy efficiency optimization May 17 01:43:44.028520 kernel: intel_pstate: HWP enabled May 17 01:43:44.028527 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 May 17 01:43:44.028533 kernel: vesafb: scrolling: redraw May 17 01:43:44.028538 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 May 17 01:43:44.028544 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000064e17301, using 768k, total 768k May 17 01:43:44.028550 kernel: Console: switching to colour frame buffer device 128x48 May 17 01:43:44.028556 kernel: fb0: VESA VGA frame buffer device May 17 01:43:44.028561 kernel: NET: Registered PF_INET6 protocol family May 17 01:43:44.028567 kernel: Segment Routing with IPv6 May 17 01:43:44.028573 kernel: In-situ OAM (IOAM) with IPv6 May 17 01:43:44.028579 kernel: NET: Registered PF_PACKET protocol family May 17 01:43:44.028585 kernel: Key type dns_resolver registered May 17 01:43:44.028591 kernel: microcode: Current revision: 0x000000fc May 17 01:43:44.028596 kernel: microcode: Updated early from: 0x000000f4 May 17 01:43:44.028602 kernel: microcode: Microcode Update Driver: v2.2. May 17 01:43:44.028608 kernel: IPI shorthand broadcast: enabled May 17 01:43:44.028613 kernel: sched_clock: Marking stable (2489047632, 1378350655)->(4398188128, -530789841) May 17 01:43:44.028619 kernel: registered taskstats version 1 May 17 01:43:44.028625 kernel: Loading compiled-in X.509 certificates May 17 01:43:44.028631 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 01:43:44.028637 kernel: Key type .fscrypt registered May 17 01:43:44.028643 kernel: Key type fscrypt-provisioning registered May 17 01:43:44.028648 kernel: ima: Allocated hash algorithm: sha1 May 17 01:43:44.028654 kernel: ima: No architecture policies found May 17 01:43:44.028660 kernel: clk: Disabling unused clocks May 17 01:43:44.028665 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 01:43:44.028671 kernel: Write protecting the kernel read-only data: 36864k May 17 01:43:44.028677 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 01:43:44.028683 kernel: Run /init as init process May 17 01:43:44.028689 kernel: with arguments: May 17 01:43:44.028695 kernel: /init May 17 01:43:44.028700 kernel: with environment: May 17 01:43:44.028706 kernel: HOME=/ May 17 01:43:44.028711 kernel: TERM=linux May 17 01:43:44.028717 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 01:43:44.028724 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 01:43:44.028732 systemd[1]: Detected architecture x86-64. May 17 01:43:44.028738 systemd[1]: Running in initrd. May 17 01:43:44.028744 systemd[1]: No hostname configured, using default hostname. 
May 17 01:43:44.028750 systemd[1]: Hostname set to . May 17 01:43:44.028755 systemd[1]: Initializing machine ID from random generator. May 17 01:43:44.028762 systemd[1]: Queued start job for default target initrd.target. May 17 01:43:44.028767 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 01:43:44.028773 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 01:43:44.028781 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 01:43:44.028787 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 01:43:44.028793 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 01:43:44.028799 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 01:43:44.028805 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 01:43:44.028812 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz May 17 01:43:44.028818 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns May 17 01:43:44.028824 kernel: clocksource: Switched to clocksource tsc May 17 01:43:44.028830 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 01:43:44.028836 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 01:43:44.028842 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 01:43:44.028848 systemd[1]: Reached target paths.target - Path Units. May 17 01:43:44.028854 systemd[1]: Reached target slices.target - Slice Units. May 17 01:43:44.028860 systemd[1]: Reached target swap.target - Swaps. May 17 01:43:44.028866 systemd[1]: Reached target timers.target - Timer Units. May 17 01:43:44.028873 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 01:43:44.028879 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 01:43:44.028885 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 01:43:44.028891 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 01:43:44.028897 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 01:43:44.028903 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 01:43:44.028909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 01:43:44.028915 systemd[1]: Reached target sockets.target - Socket Units. May 17 01:43:44.028921 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 01:43:44.028928 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 01:43:44.028934 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 01:43:44.028940 systemd[1]: Starting systemd-fsck-usr.service... May 17 01:43:44.028945 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 01:43:44.028961 systemd-journald[264]: Collecting audit messages is disabled. May 17 01:43:44.028976 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
May 17 01:43:44.028983 systemd-journald[264]: Journal started May 17 01:43:44.028996 systemd-journald[264]: Runtime Journal (/run/log/journal/9802662725254ac6b084597311b63347) is 8.0M, max 639.9M, 631.9M free. May 17 01:43:44.052250 systemd-modules-load[265]: Inserted module 'overlay' May 17 01:43:44.074284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 01:43:44.095328 systemd[1]: Started systemd-journald.service - Journal Service. May 17 01:43:44.104693 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 01:43:44.104996 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 01:43:44.105099 systemd[1]: Finished systemd-fsck-usr.service. May 17 01:43:44.105931 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 01:43:44.106397 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 01:43:44.148380 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 01:43:44.166044 systemd-modules-load[265]: Inserted module 'br_netfilter' May 17 01:43:44.207634 kernel: Bridge firewalling registered May 17 01:43:44.166470 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 01:43:44.224835 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 01:43:44.245812 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 01:43:44.268019 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 01:43:44.307597 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 01:43:44.308103 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 01:43:44.308530 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 01:43:44.313532 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 01:43:44.313674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 01:43:44.314869 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 01:43:44.321517 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 01:43:44.322176 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 01:43:44.337453 systemd-resolved[299]: Positive Trust Anchors: May 17 01:43:44.337459 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 01:43:44.337495 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 01:43:44.339772 systemd-resolved[299]: Defaulting to hostname 'linux'. May 17 01:43:44.342626 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 17 01:43:44.371658 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 01:43:44.479500 dracut-cmdline[305]: dracut-dracut-053
May 17 01:43:44.479500 dracut-cmdline[305]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 01:43:44.631320 kernel: SCSI subsystem initialized
May 17 01:43:44.653311 kernel: Loading iSCSI transport class v2.0-870.
May 17 01:43:44.676336 kernel: iscsi: registered transport (tcp)
May 17 01:43:44.707679 kernel: iscsi: registered transport (qla4xxx)
May 17 01:43:44.707695 kernel: QLogic iSCSI HBA Driver
May 17 01:43:44.740577 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 01:43:44.755531 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 01:43:44.838705 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 01:43:44.838734 kernel: device-mapper: uevent: version 1.0.3
May 17 01:43:44.858331 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 01:43:44.917347 kernel: raid6: avx2x4 gen() 52378 MB/s
May 17 01:43:44.949304 kernel: raid6: avx2x2 gen() 54011 MB/s
May 17 01:43:44.985661 kernel: raid6: avx2x1 gen() 45282 MB/s
May 17 01:43:44.985678 kernel: raid6: using algorithm avx2x2 gen() 54011 MB/s
May 17 01:43:45.032698 kernel: raid6: .... xor() 31476 MB/s, rmw enabled
May 17 01:43:45.032715 kernel: raid6: using avx2x2 recovery algorithm
May 17 01:43:45.073330 kernel: xor: automatically using best checksumming function avx
May 17 01:43:45.187310 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 01:43:45.193149 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 01:43:45.218620 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 01:43:45.225435 systemd-udevd[490]: Using default interface naming scheme 'v255'.
May 17 01:43:45.227868 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 01:43:45.266523 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 01:43:45.311800 dracut-pre-trigger[502]: rd.md=0: removing MD RAID activation
May 17 01:43:45.331922 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 01:43:45.348560 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 01:43:45.462402 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 01:43:45.492430 kernel: cryptd: max_cpu_qlen set to 1000
May 17 01:43:45.492446 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 01:43:45.493230 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 01:43:45.522190 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 01:43:45.525480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 01:43:45.630292 kernel: PTP clock support registered
May 17 01:43:45.630324 kernel: libata version 3.00 loaded.
May 17 01:43:45.630345 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 01:43:45.630357 kernel: ACPI: bus type USB registered
May 17 01:43:45.630368 kernel: usbcore: registered new interface driver usbfs
May 17 01:43:45.630379 kernel: usbcore: registered new interface driver hub
May 17 01:43:45.630389 kernel: usbcore: registered new device driver usb
May 17 01:43:45.630410 kernel: AES CTR mode by8 optimization enabled
May 17 01:43:45.525598 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 01:43:45.636279 kernel: ahci 0000:00:17.0: version 3.0
May 17 01:43:45.650322 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
May 17 01:43:45.650350 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
May 17 01:43:45.650453 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
May 17 01:43:45.664379 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 01:43:46.288391 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
May 17 01:43:46.288489 kernel: igb 0000:03:00.0: added PHC on eth0
May 17 01:43:46.288567 kernel: scsi host0: ahci
May 17 01:43:46.288631 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
May 17 01:43:46.288698 kernel: scsi host1: ahci
May 17 01:43:46.288761 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
May 17 01:43:46.288822 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
May 17 01:43:46.288882 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
May 17 01:43:46.288940 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
May 17 01:43:46.288998 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
May 17 01:43:46.289056 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
May 17 01:43:46.289115 kernel: hub 1-0:1.0: USB hub found
May 17 01:43:46.289188 kernel: hub 1-0:1.0: 16 ports detected
May 17 01:43:46.289253 kernel: hub 2-0:1.0: USB hub found
May 17 01:43:46.289326 kernel: hub 2-0:1.0: 10 ports detected
May 17 01:43:46.289390 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:31:f8
May 17 01:43:46.289452 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
May 17 01:43:46.289514 kernel: scsi host2: ahci
May 17 01:43:46.289577 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
May 17 01:43:46.289639 kernel: scsi host3: ahci
May 17 01:43:46.289697 kernel: igb 0000:04:00.0: added PHC on eth1
May 17 01:43:46.289761 kernel: scsi host4: ahci
May 17 01:43:46.289820 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
May 17 01:43:46.289882 kernel: scsi host5: ahci
May 17 01:43:46.289939 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:31:f9
May 17 01:43:46.290002 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
May 17 01:43:46.290063 kernel: scsi host6: ahci
May 17 01:43:46.290120 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
May 17 01:43:46.290181 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
May 17 01:43:46.290282 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
May 17 01:43:46.290290 kernel: hub 1-14:1.0: USB hub found
May 17 01:43:46.290363 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
May 17 01:43:46.290373 kernel: hub 1-14:1.0: 4 ports detected
May 17 01:43:46.290438 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
May 17 01:43:46.290447 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
May 17 01:43:46.290454 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
May 17 01:43:46.290460 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
May 17 01:43:46.290467 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
May 17 01:43:46.290474 kernel: mlx5_core 0000:01:00.0: firmware version: 14.29.2002
May 17 01:43:46.290540 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
May 17 01:43:46.270171 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 01:43:46.270361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 01:43:46.325355 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 01:43:46.353469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 01:43:46.363640 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 01:43:46.374981 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 01:43:46.375005 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 01:43:46.375028 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 01:43:46.375473 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 01:43:46.420328 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
May 17 01:43:46.463688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 01:43:46.474488 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 01:43:46.525278 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 17 01:43:46.525302 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 17 01:43:46.540276 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
May 17 01:43:46.540381 kernel: ata7: SATA link down (SStatus 0 SControl 300)
May 17 01:43:46.561352 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
May 17 01:43:46.561537 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 17 01:43:46.561547 kernel: hid: raw HID events driver (C) Jiri Kosina
May 17 01:43:46.600372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 01:43:46.721390 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 17 01:43:46.721401 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 17 01:43:46.721412 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 17 01:43:46.721420 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
May 17 01:43:46.721427 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
May 17 01:43:46.721434 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
May 17 01:43:46.702620 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 01:43:46.755315 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
May 17 01:43:46.755327 kernel: ata1.00: Features: NCQ-prio
May 17 01:43:46.772439 kernel: ata2.00: Features: NCQ-prio
May 17 01:43:46.785277 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 01:43:46.785366 kernel: ata1.00: configured for UDMA/133
May 17 01:43:46.789314 kernel: mlx5_core 0000:01:00.1: firmware version: 14.29.2002
May 17 01:43:46.789402 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
May 17 01:43:46.790305 kernel: ata2.00: configured for UDMA/133
May 17 01:43:46.803146 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
May 17 01:43:46.808327 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
May 17 01:43:46.908278 kernel: igb 0000:03:00.0 eno1: renamed from eth0
May 17 01:43:46.929543 kernel: usbcore: registered new interface driver usbhid
May 17 01:43:46.929564 kernel: usbhid: USB HID core driver
May 17 01:43:46.957277 kernel: igb 0000:04:00.0 eno2: renamed from eth1
May 17 01:43:46.957420 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
May 17 01:43:47.009520 kernel: ata2.00: Enabling discard_zeroes_data
May 17 01:43:47.009546 kernel: ata1.00: Enabling discard_zeroes_data
May 17 01:43:47.009561 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
May 17 01:43:47.029190 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
May 17 01:43:47.029311 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
May 17 01:43:47.029409 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks
May 17 01:43:47.035275 kernel: sd 1:0:0:0: [sda] Write Protect is off
May 17 01:43:47.040276 kernel: sd 0:0:0:0: [sdb] Write Protect is off
May 17 01:43:47.042322 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
May 17 01:43:47.042444 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
May 17 01:43:47.042462 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
May 17 01:43:47.049257 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
May 17 01:43:47.049369 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 17 01:43:47.049463 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00
May 17 01:43:47.061313 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
May 17 01:43:47.070278 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
May 17 01:43:47.073276 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 17 01:43:47.076311 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
May 17 01:43:47.084280 kernel: ata2.00: Enabling discard_zeroes_data
May 17 01:43:47.284584 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 17 01:43:47.284674 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
May 17 01:43:47.285343 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
May 17 01:43:47.387343 kernel: ata1.00: Enabling discard_zeroes_data
May 17 01:43:47.429591 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 01:43:47.429611 kernel: GPT:9289727 != 937703087
May 17 01:43:47.447234 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 01:43:47.462578 kernel: GPT:9289727 != 937703087
May 17 01:43:47.479392 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 01:43:47.496028 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
May 17 01:43:47.508323 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk
May 17 01:43:47.546319 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0
May 17 01:43:47.555491 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
May 17 01:43:47.574369 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (561)
May 17 01:43:47.587328 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sdb3 scanned by (udev-worker) (543)
May 17 01:43:47.587379 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2
May 17 01:43:47.618965 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
May 17 01:43:47.637562 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
May 17 01:43:47.677997 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
May 17 01:43:47.689432 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
May 17 01:43:47.728508 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 01:43:47.767383 kernel: ata1.00: Enabling discard_zeroes_data
May 17 01:43:47.767400 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
May 17 01:43:47.767450 disk-uuid[721]: Primary Header is updated.
May 17 01:43:47.767450 disk-uuid[721]: Secondary Entries is updated.
May 17 01:43:47.767450 disk-uuid[721]: Secondary Header is updated.
May 17 01:43:47.821354 kernel: ata1.00: Enabling discard_zeroes_data
May 17 01:43:47.821367 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
May 17 01:43:47.821375 kernel: ata1.00: Enabling discard_zeroes_data
May 17 01:43:47.848278 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
May 17 01:43:48.828447 kernel: ata1.00: Enabling discard_zeroes_data
May 17 01:43:48.848252 disk-uuid[722]: The operation has completed successfully.
May 17 01:43:48.856542 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
May 17 01:43:48.882139 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 01:43:48.882204 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 01:43:48.915519 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 01:43:48.952400 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 01:43:48.952466 sh[740]: Success
May 17 01:43:48.982692 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 01:43:49.007318 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 01:43:49.015574 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 01:43:49.085718 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 01:43:49.085746 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 01:43:49.106859 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 01:43:49.125872 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 01:43:49.144026 kernel: BTRFS info (device dm-0): using free space tree
May 17 01:43:49.182309 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 01:43:49.185042 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 01:43:49.193797 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 01:43:49.201524 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 01:43:49.309600 kernel: BTRFS info (device sdb6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 01:43:49.309684 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
May 17 01:43:49.309692 kernel: BTRFS info (device sdb6): using free space tree
May 17 01:43:49.309703 kernel: BTRFS info (device sdb6): enabling ssd optimizations
May 17 01:43:49.309710 kernel: BTRFS info (device sdb6): auto enabling async discard
May 17 01:43:49.242215 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 01:43:49.345400 kernel: BTRFS info (device sdb6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 01:43:49.344592 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 01:43:49.355931 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 01:43:49.396567 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 01:43:49.417394 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 01:43:49.431296 systemd-networkd[924]: lo: Link UP
May 17 01:43:49.431299 systemd-networkd[924]: lo: Gained carrier
May 17 01:43:49.433716 systemd-networkd[924]: Enumeration completed
May 17 01:43:49.449806 ignition[832]: Ignition 2.19.0
May 17 01:43:49.434496 systemd-networkd[924]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 01:43:49.449810 ignition[832]: Stage: fetch-offline
May 17 01:43:49.439443 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 01:43:49.449833 ignition[832]: no configs at "/usr/lib/ignition/base.d"
May 17 01:43:49.446541 systemd[1]: Reached target network.target - Network.
May 17 01:43:49.449839 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:43:49.451853 unknown[832]: fetched base config from "system"
May 17 01:43:49.449892 ignition[832]: parsed url from cmdline: ""
May 17 01:43:49.451858 unknown[832]: fetched user config from "system"
May 17 01:43:49.449893 ignition[832]: no config URL provided
May 17 01:43:49.461132 systemd-networkd[924]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 01:43:49.449896 ignition[832]: reading system config file "/usr/lib/ignition/user.ign"
May 17 01:43:49.475625 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 01:43:49.449919 ignition[832]: parsing config with SHA512: bafd7f1f57a5bc102d1027f29361cb8eb1f72c89622b4165d721234f81f1e0c80a025fba630786861350a34c23a6a2b3b90730753d9c8e3ce05d533f4febe512
May 17 01:43:49.489974 systemd-networkd[924]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 01:43:49.452071 ignition[832]: fetch-offline: fetch-offline passed
May 17 01:43:49.502869 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 17 01:43:49.452073 ignition[832]: POST message to Packet Timeline
May 17 01:43:49.515593 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 01:43:49.452076 ignition[832]: POST Status error: resource requires networking
May 17 01:43:49.684401 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
May 17 01:43:49.675555 systemd-networkd[924]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 01:43:49.452111 ignition[832]: Ignition finished successfully
May 17 01:43:49.526985 ignition[936]: Ignition 2.19.0
May 17 01:43:49.526988 ignition[936]: Stage: kargs
May 17 01:43:49.527088 ignition[936]: no configs at "/usr/lib/ignition/base.d"
May 17 01:43:49.527095 ignition[936]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:43:49.527587 ignition[936]: kargs: kargs passed
May 17 01:43:49.527590 ignition[936]: POST message to Packet Timeline
May 17 01:43:49.527599 ignition[936]: GET https://metadata.packet.net/metadata: attempt #1
May 17 01:43:49.527967 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52923->[::1]:53: read: connection refused
May 17 01:43:49.728801 ignition[936]: GET https://metadata.packet.net/metadata: attempt #2
May 17 01:43:49.729830 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35098->[::1]:53: read: connection refused
May 17 01:43:49.899366 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
May 17 01:43:49.900665 systemd-networkd[924]: eno1: Link UP
May 17 01:43:49.900887 systemd-networkd[924]: eno2: Link UP
May 17 01:43:49.901088 systemd-networkd[924]: enp1s0f0np0: Link UP
May 17 01:43:49.901338 systemd-networkd[924]: enp1s0f0np0: Gained carrier
May 17 01:43:49.915547 systemd-networkd[924]: enp1s0f1np1: Link UP
May 17 01:43:49.957527 systemd-networkd[924]: enp1s0f0np0: DHCPv4 address 145.40.90.165/31, gateway 145.40.90.164 acquired from 145.40.83.140
May 17 01:43:50.130403 ignition[936]: GET https://metadata.packet.net/metadata: attempt #3
May 17 01:43:50.131381 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60695->[::1]:53: read: connection refused
May 17 01:43:50.688041 systemd-networkd[924]: enp1s0f1np1: Gained carrier
May 17 01:43:50.931877 ignition[936]: GET https://metadata.packet.net/metadata: attempt #4
May 17 01:43:50.932857 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54254->[::1]:53: read: connection refused
May 17 01:43:51.647876 systemd-networkd[924]: enp1s0f0np0: Gained IPv6LL
May 17 01:43:52.415881 systemd-networkd[924]: enp1s0f1np1: Gained IPv6LL
May 17 01:43:52.534554 ignition[936]: GET https://metadata.packet.net/metadata: attempt #5
May 17 01:43:52.535657 ignition[936]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60888->[::1]:53: read: connection refused
May 17 01:43:55.739115 ignition[936]: GET https://metadata.packet.net/metadata: attempt #6
May 17 01:43:56.740773 ignition[936]: GET result: OK
May 17 01:43:57.089823 ignition[936]: Ignition finished successfully
May 17 01:43:57.094842 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 01:43:57.125529 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 01:43:57.131938 ignition[955]: Ignition 2.19.0
May 17 01:43:57.131943 ignition[955]: Stage: disks
May 17 01:43:57.132059 ignition[955]: no configs at "/usr/lib/ignition/base.d"
May 17 01:43:57.132066 ignition[955]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:43:57.132632 ignition[955]: disks: disks passed
May 17 01:43:57.132635 ignition[955]: POST message to Packet Timeline
May 17 01:43:57.132644 ignition[955]: GET https://metadata.packet.net/metadata: attempt #1
May 17 01:43:58.231026 ignition[955]: GET result: OK
May 17 01:43:58.753819 ignition[955]: Ignition finished successfully
May 17 01:43:58.757537 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 01:43:58.772730 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 01:43:58.790523 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 01:43:58.812509 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 01:43:58.833662 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 01:43:58.854565 systemd[1]: Reached target basic.target - Basic System.
May 17 01:43:58.887551 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 01:43:58.919118 systemd-fsck[974]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 01:43:58.930847 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 01:43:58.952449 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 01:43:59.047276 kernel: EXT4-fs (sdb9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 01:43:59.047529 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 01:43:59.057759 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 01:43:59.074577 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 01:43:59.100249 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 01:43:59.148334 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (984)
May 17 01:43:59.148349 kernel: BTRFS info (device sdb6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 01:43:59.116700 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 01:43:59.220406 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
May 17 01:43:59.220418 kernel: BTRFS info (device sdb6): using free space tree
May 17 01:43:59.220426 kernel: BTRFS info (device sdb6): enabling ssd optimizations
May 17 01:43:59.220433 kernel: BTRFS info (device sdb6): auto enabling async discard
May 17 01:43:59.226393 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
May 17 01:43:59.240643 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 01:43:59.240667 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 01:43:59.319496 coreos-metadata[986]: May 17 01:43:59.294 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 01:43:59.264223 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 01:43:59.351391 coreos-metadata[1002]: May 17 01:43:59.294 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 01:43:59.282584 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 01:43:59.324518 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 01:43:59.382399 initrd-setup-root[1016]: cut: /sysroot/etc/passwd: No such file or directory
May 17 01:43:59.392384 initrd-setup-root[1023]: cut: /sysroot/etc/group: No such file or directory
May 17 01:43:59.402366 initrd-setup-root[1030]: cut: /sysroot/etc/shadow: No such file or directory
May 17 01:43:59.412535 initrd-setup-root[1037]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 01:43:59.406922 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 01:43:59.427578 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 01:43:59.449848 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 01:43:59.489324 kernel: BTRFS info (device sdb6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 01:43:59.480916 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 01:43:59.490383 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 01:43:59.516544 ignition[1104]: INFO : Ignition 2.19.0
May 17 01:43:59.516544 ignition[1104]: INFO : Stage: mount
May 17 01:43:59.516544 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 01:43:59.516544 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:43:59.516544 ignition[1104]: INFO : mount: mount passed
May 17 01:43:59.516544 ignition[1104]: INFO : POST message to Packet Timeline
May 17 01:43:59.516544 ignition[1104]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:44:00.308482 coreos-metadata[1002]: May 17 01:44:00.308 INFO Fetch successful
May 17 01:44:00.377279 ignition[1104]: INFO : GET result: OK
May 17 01:44:00.389112 systemd[1]: flatcar-static-network.service: Deactivated successfully.
May 17 01:44:00.389182 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
May 17 01:44:00.430039 coreos-metadata[986]: May 17 01:44:00.429 INFO Fetch successful
May 17 01:44:00.461547 coreos-metadata[986]: May 17 01:44:00.461 INFO wrote hostname ci-4081.3.3-n-d569167b40 to /sysroot/etc/hostname
May 17 01:44:00.462847 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 01:44:00.770422 ignition[1104]: INFO : Ignition finished successfully
May 17 01:44:00.773100 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 01:44:00.803496 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 01:44:00.814587 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 01:44:00.873897 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1127)
May 17 01:44:00.873915 kernel: BTRFS info (device sdb6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 01:44:00.892757 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
May 17 01:44:00.909493 kernel: BTRFS info (device sdb6): using free space tree
May 17 01:44:00.946134 kernel: BTRFS info (device sdb6): enabling ssd optimizations
May 17 01:44:00.946150 kernel: BTRFS info (device sdb6): auto enabling async discard
May 17 01:44:00.958559 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 01:44:00.986833 ignition[1144]: INFO : Ignition 2.19.0
May 17 01:44:00.986833 ignition[1144]: INFO : Stage: files
May 17 01:44:01.001458 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 01:44:01.001458 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:44:01.001458 ignition[1144]: DEBUG : files: compiled without relabeling support, skipping
May 17 01:44:01.001458 ignition[1144]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 01:44:01.001458 ignition[1144]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 01:44:01.001458 ignition[1144]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 01:44:01.001458 ignition[1144]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 01:44:01.001458 ignition[1144]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 01:44:01.001458 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 01:44:01.001458 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 17 01:44:00.991230 unknown[1144]: wrote ssh authorized keys file for user: core
May 17 01:44:01.132516 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 01:44:01.166166 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 01:44:01.166166 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 01:44:01.198479 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 17 01:44:01.946059 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 17 01:44:02.409168 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 01:44:02.409168 ignition[1144]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 17 01:44:02.439510 ignition[1144]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 01:44:02.439510 ignition[1144]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 01:44:02.439510 ignition[1144]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 17 01:44:02.439510 ignition[1144]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 17 01:44:02.439510 ignition[1144]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 17 01:44:02.439510 ignition[1144]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 01:44:02.439510 ignition[1144]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 01:44:02.439510 ignition[1144]: INFO : files: files passed
May 17 01:44:02.439510 ignition[1144]: INFO : POST message to Packet Timeline
May 17 01:44:02.439510 ignition[1144]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:44:03.412995 ignition[1144]: INFO : GET result: OK
May 17 01:44:03.814691 ignition[1144]: INFO : Ignition finished successfully
May 17 01:44:03.817112 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 01:44:03.848591 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 01:44:03.859868 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 01:44:03.880609 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 01:44:03.880682 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 01:44:03.933567 initrd-setup-root-after-ignition[1181]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 01:44:03.933567 initrd-setup-root-after-ignition[1181]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 01:44:03.903809 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 01:44:03.995498 initrd-setup-root-after-ignition[1185]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 01:44:03.925584 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 01:44:03.958517 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 01:44:04.033397 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 01:44:04.033682 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 01:44:04.054934 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 01:44:04.074592 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 01:44:04.094753 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 01:44:04.108668 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 01:44:04.180094 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 01:44:04.211925 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 01:44:04.226797 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 01:44:04.246572 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 01:44:04.267660 systemd[1]: Stopped target timers.target - Timer Units.
May 17 01:44:04.286969 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 01:44:04.287357 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 01:44:04.314937 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 01:44:04.335660 systemd[1]: Stopped target basic.target - Basic System.
May 17 01:44:04.353869 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 01:44:04.372925 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 01:44:04.393991 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 01:44:04.415077 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 01:44:04.435854 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 01:44:04.456899 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 01:44:04.478017 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 01:44:04.497883 systemd[1]: Stopped target swap.target - Swaps.
May 17 01:44:04.516887 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 01:44:04.517318 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 01:44:04.542999 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 01:44:04.564019 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 01:44:04.584757 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 01:44:04.585145 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 01:44:04.608771 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 01:44:04.609171 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 01:44:04.641858 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 01:44:04.642331 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 01:44:04.663086 systemd[1]: Stopped target paths.target - Path Units.
May 17 01:44:04.681746 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 01:44:04.682124 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 01:44:04.702895 systemd[1]: Stopped target slices.target - Slice Units.
May 17 01:44:04.721997 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 01:44:04.740969 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 01:44:04.741293 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 01:44:04.760911 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 01:44:04.761213 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 01:44:04.783971 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 01:44:04.784396 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 01:44:04.911427 ignition[1206]: INFO : Ignition 2.19.0
May 17 01:44:04.911427 ignition[1206]: INFO : Stage: umount
May 17 01:44:04.911427 ignition[1206]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 01:44:04.911427 ignition[1206]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:44:04.911427 ignition[1206]: INFO : umount: umount passed
May 17 01:44:04.911427 ignition[1206]: INFO : POST message to Packet Timeline
May 17 01:44:04.911427 ignition[1206]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:44:04.804968 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 01:44:04.805367 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 01:44:04.824968 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 01:44:04.825378 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 01:44:04.855548 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 01:44:04.862594 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 01:44:04.862707 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 01:44:04.903524 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 01:44:04.919501 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 01:44:04.919575 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 01:44:04.942613 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 01:44:04.942728 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 01:44:04.979848 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 01:44:04.981509 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 01:44:04.981746 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 01:44:05.001265 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 01:44:05.001542 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 01:44:05.864863 ignition[1206]: INFO : GET result: OK
May 17 01:44:06.281481 ignition[1206]: INFO : Ignition finished successfully
May 17 01:44:06.282949 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 01:44:06.283106 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 01:44:06.302668 systemd[1]: Stopped target network.target - Network.
May 17 01:44:06.319571 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 01:44:06.319771 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 01:44:06.339659 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 01:44:06.339814 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 01:44:06.359687 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 01:44:06.359844 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 01:44:06.378665 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 01:44:06.378833 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 01:44:06.397668 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 01:44:06.397832 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 01:44:06.417061 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 01:44:06.427401 systemd-networkd[924]: enp1s0f0np0: DHCPv6 lease lost
May 17 01:44:06.434496 systemd-networkd[924]: enp1s0f1np1: DHCPv6 lease lost
May 17 01:44:06.434857 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 01:44:06.454331 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 01:44:06.454611 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 01:44:06.473700 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 01:44:06.474077 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 01:44:06.493898 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 01:44:06.494015 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 01:44:06.529466 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 01:44:06.549420 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 01:44:06.549462 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 01:44:06.567491 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 01:44:06.567555 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 01:44:06.585630 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 01:44:06.585783 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 01:44:06.605651 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 01:44:06.605816 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 01:44:06.626907 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 01:44:06.648541 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 01:44:06.648906 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 01:44:06.677951 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 01:44:06.677988 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 01:44:06.704410 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 01:44:06.704550 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 01:44:06.724601 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 01:44:06.724688 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 01:44:06.763477 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 01:44:06.763734 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 01:44:06.793467 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 01:44:06.793702 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 01:44:06.853588 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 01:44:06.865431 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 01:44:06.865608 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 01:44:06.876512 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 01:44:06.876537 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 01:44:07.093507 systemd-journald[264]: Received SIGTERM from PID 1 (systemd).
May 17 01:44:06.905576 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 01:44:06.905619 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 01:44:06.925627 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 01:44:06.925676 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 01:44:06.952960 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 01:44:06.970475 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 01:44:07.025755 systemd[1]: Switching root.
May 17 01:44:07.156443 systemd-journald[264]: Journal stopped
May 17 01:44:09.779143 kernel: SELinux: policy capability network_peer_controls=1
May 17 01:44:09.779158 kernel: SELinux: policy capability open_perms=1
May 17 01:44:09.779165 kernel: SELinux: policy capability extended_socket_class=1
May 17 01:44:09.779171 kernel: SELinux: policy capability always_check_network=0
May 17 01:44:09.779176 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 01:44:09.779181 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 01:44:09.779187 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 01:44:09.779193 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 01:44:09.779198 kernel: audit: type=1403 audit(1747446247.405:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 01:44:09.779204 systemd[1]: Successfully loaded SELinux policy in 156.175ms.
May 17 01:44:09.779212 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.027ms.
May 17 01:44:09.779219 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 01:44:09.779225 systemd[1]: Detected architecture x86-64.
May 17 01:44:09.779231 systemd[1]: Detected first boot.
May 17 01:44:09.779237 systemd[1]: Hostname set to <ci-4081.3.3-n-d569167b40>.
May 17 01:44:09.779245 systemd[1]: Initializing machine ID from random generator.
May 17 01:44:09.779251 zram_generator::config[1259]: No configuration found.
May 17 01:44:09.779258 systemd[1]: Populated /etc with preset unit settings.
May 17 01:44:09.779264 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 01:44:09.779270 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 01:44:09.779280 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 01:44:09.779287 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 01:44:09.779294 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 01:44:09.779300 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 01:44:09.779307 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 01:44:09.779313 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 01:44:09.779320 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 01:44:09.779326 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 01:44:09.779333 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 01:44:09.779340 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 01:44:09.779347 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 01:44:09.779353 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 01:44:09.779360 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 01:44:09.779366 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 01:44:09.779372 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 01:44:09.779379 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
May 17 01:44:09.779385 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 01:44:09.779393 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 01:44:09.779399 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 01:44:09.779406 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 01:44:09.779413 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 01:44:09.779420 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 01:44:09.779427 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 01:44:09.779433 systemd[1]: Reached target slices.target - Slice Units.
May 17 01:44:09.779441 systemd[1]: Reached target swap.target - Swaps.
May 17 01:44:09.779448 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 01:44:09.779454 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 01:44:09.779461 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 01:44:09.779468 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 01:44:09.779474 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 01:44:09.779482 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 01:44:09.779489 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 01:44:09.779495 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 01:44:09.779502 systemd[1]: Mounting media.mount - External Media Directory...
May 17 01:44:09.779509 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 01:44:09.779515 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 01:44:09.779522 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 01:44:09.779529 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 01:44:09.779536 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 01:44:09.779543 systemd[1]: Reached target machines.target - Containers.
May 17 01:44:09.779550 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 01:44:09.779556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 01:44:09.779563 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 01:44:09.779570 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 01:44:09.779576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 01:44:09.779583 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 01:44:09.779591 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 01:44:09.779597 kernel: ACPI: bus type drm_connector registered
May 17 01:44:09.779603 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 01:44:09.779610 kernel: fuse: init (API version 7.39)
May 17 01:44:09.779616 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 01:44:09.779622 kernel: loop: module loaded
May 17 01:44:09.779629 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 01:44:09.779636 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 01:44:09.779644 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 01:44:09.779650 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 01:44:09.779657 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 01:44:09.779664 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 01:44:09.779678 systemd-journald[1362]: Collecting audit messages is disabled.
May 17 01:44:09.779694 systemd-journald[1362]: Journal started May 17 01:44:09.779708 systemd-journald[1362]: Runtime Journal (/run/log/journal/a3a87790ca5d46069e86f4dec6f0f566) is 8.0M, max 639.9M, 631.9M free. May 17 01:44:07.917522 systemd[1]: Queued start job for default target multi-user.target. May 17 01:44:07.931176 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. May 17 01:44:07.931407 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 01:44:09.807313 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 01:44:09.842347 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 01:44:09.875347 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 01:44:09.908316 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 01:44:09.941719 systemd[1]: verity-setup.service: Deactivated successfully. May 17 01:44:09.941747 systemd[1]: Stopped verity-setup.service. May 17 01:44:10.004316 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:44:10.025471 systemd[1]: Started systemd-journald.service - Journal Service. May 17 01:44:10.035847 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 01:44:10.045553 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 01:44:10.055549 systemd[1]: Mounted media.mount - External Media Directory. May 17 01:44:10.065526 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 01:44:10.075522 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 01:44:10.085503 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 01:44:10.095631 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 01:44:10.106725 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 01:44:10.117809 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 01:44:10.118011 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 01:44:10.130069 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:44:10.130436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 01:44:10.142118 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 01:44:10.142497 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 01:44:10.153151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:44:10.153516 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 01:44:10.165146 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 01:44:10.165514 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 01:44:10.177147 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:44:10.177504 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 01:44:10.189119 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 01:44:10.199103 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 01:44:10.211093 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
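The "Runtime Journal (/run/log/journal/...) is 8.0M, max 639.9M" line shows journald sizing the volatile journal in /run automatically; the flush service seen later moves it to persistent storage under /var/log/journal. Both budgets can be pinned explicitly in journald.conf; the values below are examples only:

# /etc/systemd/journald.conf (sketch; values illustrative)
[Journal]
RuntimeMaxUse=64M
SystemMaxUse=512M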
May 17 01:44:10.223104 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 01:44:10.242864 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 01:44:10.260469 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 01:44:10.271171 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 01:44:10.280444 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 01:44:10.280472 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 01:44:10.291529 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 01:44:10.322583 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 01:44:10.334296 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 01:44:10.344521 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 01:44:10.345993 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 01:44:10.355917 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 01:44:10.367374 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 01:44:10.368047 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 01:44:10.371598 systemd-journald[1362]: Time spent on flushing to /var/log/journal/a3a87790ca5d46069e86f4dec6f0f566 is 13.365ms for 1367 entries. May 17 01:44:10.371598 systemd-journald[1362]: System Journal (/var/log/journal/a3a87790ca5d46069e86f4dec6f0f566) is 8.0M, max 195.6M, 187.6M free. May 17 01:44:10.411036 systemd-journald[1362]: Received client request to flush runtime journal. May 17 01:44:10.386392 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 01:44:10.387043 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 01:44:10.394639 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 01:44:10.415984 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 01:44:10.443352 kernel: loop0: detected capacity change from 0 to 142488 May 17 01:44:10.482970 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 01:44:10.483276 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 01:44:10.494364 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 01:44:10.505503 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 01:44:10.516471 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 01:44:10.527483 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 01:44:10.543566 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 01:44:10.557331 kernel: loop1: detected capacity change from 0 to 140768 May 17 01:44:10.567506 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
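systemd-sysctl.service, finished above, applies key=value kernel settings collected from /etc/sysctl.d/*.conf (and the /usr/lib and /run equivalents). A hypothetical drop-in showing the format:

# /etc/sysctl.d/99-example.conf (hypothetical)
net.ipv4.ip_forward = 1
vm.swappiness = 10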
May 17 01:44:10.577497 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 01:44:10.590179 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 01:44:10.618499 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 01:44:10.636111 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 01:44:10.639324 kernel: loop2: detected capacity change from 0 to 224512 May 17 01:44:10.639723 udevadm[1398]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 01:44:10.643339 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 01:44:10.643639 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 01:44:10.699283 systemd-tmpfiles[1411]: ACLs are not supported, ignoring. May 17 01:44:10.699294 systemd-tmpfiles[1411]: ACLs are not supported, ignoring. May 17 01:44:10.701581 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 01:44:10.713807 ldconfig[1388]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 01:44:10.718581 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 01:44:10.721321 kernel: loop3: detected capacity change from 0 to 8 May 17 01:44:10.779335 kernel: loop4: detected capacity change from 0 to 142488 May 17 01:44:10.794708 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 01:44:10.809323 kernel: loop5: detected capacity change from 0 to 140768 May 17 01:44:10.840278 kernel: loop6: detected capacity change from 0 to 224512 May 17 01:44:10.850428 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 01:44:10.862315 systemd-udevd[1420]: Using default interface naming scheme 'v255'. May 17 01:44:10.872279 kernel: loop7: detected capacity change from 0 to 8 May 17 01:44:10.872504 (sd-merge)[1418]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. May 17 01:44:10.872733 (sd-merge)[1418]: Merged extensions into '/usr'. May 17 01:44:10.874939 systemd[1]: Reloading requested from client PID 1393 ('systemd-sysext') (unit systemd-sysext.service)... May 17 01:44:10.874946 systemd[1]: Reloading... May 17 01:44:10.907157 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 May 17 01:44:10.907241 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1492) May 17 01:44:10.907269 zram_generator::config[1518]: No configuration found. May 17 01:44:10.940109 kernel: ACPI: button: Sleep Button [SLPB] May 17 01:44:10.976293 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 17 01:44:10.976368 kernel: mousedev: PS/2 mouse device common for all mice May 17 01:44:11.002315 kernel: ACPI: button: Power Button [PWRF] May 17 01:44:11.031281 kernel: IPMI message handler: version 39.2 May 17 01:44:11.046913 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
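The docker.socket warning at the end of the block above points at line 6 of the unit: it still references the legacy /var/run/ symlink, so systemd rewrites the path to /run/docker.sock at load time and asks for the unit to be updated. The permanent fix is a one-line change in the [Socket] section, roughly:

# /usr/lib/systemd/system/docker.socket (relevant section, sketch)
[Socket]
ListenStream=/run/docker.sock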
May 17 01:44:11.073471 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set May 17 01:44:11.073690 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt May 17 01:44:11.103281 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) May 17 01:44:11.105278 kernel: ipmi device interface May 17 01:44:11.105323 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface May 17 01:44:11.105457 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface May 17 01:44:11.112159 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. May 17 01:44:11.112314 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. May 17 01:44:11.166608 systemd[1]: Reloading finished in 291 ms. May 17 01:44:11.173279 kernel: iTCO_vendor_support: vendor-support=0 May 17 01:44:11.173329 kernel: ipmi_si: IPMI System Interface driver May 17 01:44:11.206338 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS May 17 01:44:11.224322 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 May 17 01:44:11.235277 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine May 17 01:44:11.252278 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI May 17 01:44:11.268742 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 May 17 01:44:11.295719 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI May 17 01:44:11.311694 kernel: ipmi_si: Adding ACPI-specified kcs state machine May 17 01:44:11.322326 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 May 17 01:44:11.372890 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) May 17 01:44:11.373342 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) May 17 01:44:11.408730 kernel: intel_rapl_common: Found RAPL domain package May 17 01:44:11.408778 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. May 17 01:44:11.408889 kernel: intel_rapl_common: Found RAPL domain core May 17 01:44:11.434670 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) May 17 01:44:11.434772 kernel: intel_rapl_common: Found RAPL domain dram May 17 01:44:11.482277 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized May 17 01:44:11.488410 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 01:44:11.526601 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 01:44:11.528277 kernel: ipmi_ssif: IPMI SSIF Interface driver May 17 01:44:11.544267 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 01:44:11.578434 systemd[1]: Starting ensure-sysext.service... May 17 01:44:11.586857 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 01:44:11.612366 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 01:44:11.619613 lvm[1602]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 01:44:11.624244 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 01:44:11.624826 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
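systemd-tmpfiles-setup, starting above, processes tmpfiles.d entries of the form "type path mode user group age argument"; the "Duplicate line" warnings that follow simply mean two configuration files declare the same path and the later declaration is ignored. An illustrative drop-in:

# /etc/tmpfiles.d/example.conf (hypothetical)
d /run/example 0755 root root -
L /run/example/motd - - - - /run/flatcar/motd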
May 17 01:44:11.625408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 01:44:11.643725 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 01:44:11.673453 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 01:44:11.681547 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 01:44:11.681774 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 01:44:11.682317 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 01:44:11.682496 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. May 17 01:44:11.682535 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. May 17 01:44:11.684228 systemd-tmpfiles[1606]: Detected autofs mount point /boot during canonicalization of boot. May 17 01:44:11.684232 systemd-tmpfiles[1606]: Skipping /boot May 17 01:44:11.684598 systemd[1]: Reloading requested from client PID 1601 ('systemctl') (unit ensure-sysext.service)... May 17 01:44:11.684605 systemd[1]: Reloading... May 17 01:44:11.688474 systemd-tmpfiles[1606]: Detected autofs mount point /boot during canonicalization of boot. May 17 01:44:11.688478 systemd-tmpfiles[1606]: Skipping /boot May 17 01:44:11.722277 zram_generator::config[1639]: No configuration found. May 17 01:44:11.775909 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 01:44:11.829694 systemd[1]: Reloading finished in 144 ms. May 17 01:44:11.850545 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 01:44:11.861565 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 01:44:11.875572 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 01:44:11.894473 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 01:44:11.906316 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 01:44:11.911901 augenrules[1716]: No rules May 17 01:44:11.927724 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 01:44:11.939104 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 01:44:11.941176 lvm[1721]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 01:44:11.951669 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 01:44:11.963017 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 01:44:11.975208 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 01:44:11.985970 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 01:44:11.995465 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 01:44:12.006638 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 01:44:12.017605 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 01:44:12.029641 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
May 17 01:44:12.041871 systemd-networkd[1605]: lo: Link UP May 17 01:44:12.041874 systemd-networkd[1605]: lo: Gained carrier May 17 01:44:12.044788 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:44:12.044926 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 01:44:12.045034 systemd-networkd[1605]: bond0: netdev ready May 17 01:44:12.046081 systemd-networkd[1605]: Enumeration completed May 17 01:44:12.058088 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 01:44:12.058683 systemd-networkd[1605]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:6f:1c:f6.network. May 17 01:44:12.068100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 01:44:12.079990 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 01:44:12.086756 systemd-resolved[1723]: Positive Trust Anchors: May 17 01:44:12.086762 systemd-resolved[1723]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 01:44:12.086785 systemd-resolved[1723]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 01:44:12.089420 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 01:44:12.089778 systemd-resolved[1723]: Using system hostname 'ci-4081.3.3-n-d569167b40'. May 17 01:44:12.090238 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 01:44:12.099382 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 01:44:12.099448 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:44:12.099929 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 01:44:12.109792 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 01:44:12.120596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:44:12.120672 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 01:44:12.131693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:44:12.131770 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 01:44:12.142630 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:44:12.142714 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 01:44:12.152777 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 01:44:12.167597 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
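The per-port file named above (10-1c:34:da:6f:1c:f6.network, with its f7 sibling appearing below) matches a physical NIC by MAC address and enslaves it to bond0. Reconstructed from the file names in the log rather than read from the host, such a file typically looks like:

# /etc/systemd/network/10-1c:34:da:6f:1c:f6.network (sketch)
[Match]
MACAddress=1c:34:da:6f:1c:f6

[Network]
Bond=bond0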
May 17 01:44:12.167854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 01:44:12.190120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 01:44:12.202581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 01:44:12.223495 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 01:44:12.239822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 01:44:12.240300 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 17 01:44:12.242876 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 01:44:12.266334 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link May 17 01:44:12.266440 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 01:44:12.266618 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:44:12.268415 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:44:12.268435 systemd-networkd[1605]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:6f:1c:f7.network. May 17 01:44:12.268621 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 01:44:12.288589 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:44:12.288665 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 01:44:12.299509 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:44:12.299578 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 01:44:12.312534 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:44:12.312660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 01:44:12.323441 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 01:44:12.333977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 01:44:12.345104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 01:44:12.357067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 01:44:12.366514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 01:44:12.366642 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 01:44:12.366748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 01:44:12.367735 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:44:12.367849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 01:44:12.379868 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 17 01:44:12.380031 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 01:44:12.390206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:44:12.390463 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 01:44:12.402706 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:44:12.403071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 01:44:12.417031 systemd[1]: Finished ensure-sysext.service. May 17 01:44:12.440348 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 17 01:44:12.463379 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 01:44:12.463519 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 01:44:12.474116 systemd-networkd[1605]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 17 01:44:12.474398 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link May 17 01:44:12.476102 systemd-networkd[1605]: enp1s0f0np0: Link UP May 17 01:44:12.476592 systemd-networkd[1605]: enp1s0f0np0: Gained carrier May 17 01:44:12.501286 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 17 01:44:12.508705 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 01:44:12.515483 systemd-networkd[1605]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:6f:1c:f6.network. May 17 01:44:12.515880 systemd-networkd[1605]: enp1s0f1np1: Link UP May 17 01:44:12.516259 systemd-networkd[1605]: enp1s0f1np1: Gained carrier May 17 01:44:12.518590 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 01:44:12.528701 systemd[1]: Reached target network.target - Network. May 17 01:44:12.530697 systemd-networkd[1605]: bond0: Link UP May 17 01:44:12.531370 systemd-networkd[1605]: bond0: Gained carrier May 17 01:44:12.537480 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 01:44:12.564184 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 01:44:12.575410 systemd[1]: Reached target sysinit.target - System Initialization. May 17 01:44:12.592751 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 01:44:12.603277 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex May 17 01:44:12.603298 kernel: bond0: active interface up! May 17 01:44:12.628370 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 01:44:12.639373 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 01:44:12.650347 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 01:44:12.650362 systemd[1]: Reached target paths.target - Path Units. May 17 01:44:12.658346 systemd[1]: Reached target time-set.target - System Time Set. May 17 01:44:12.667423 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 01:44:12.677392 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 01:44:12.688348 systemd[1]: Reached target timers.target - Timer Units. 
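bond0 itself is defined by a netdev/network pair; the "No 802.3ad response from the link partner" warning above implies the bond runs in LACP (802.3ad) mode. A sketch under those assumptions; the .netdev file name and the DHCP setting are illustrative, only 05-bond0.network is named in the log:

# /etc/systemd/network/05-bond0.netdev (assumed name, sketch)
[NetDev]
Name=bond0
Kind=bond

[Bond]
Mode=802.3ad

# /etc/systemd/network/05-bond0.network (sketch)
[Match]
Name=bond0

[Network]
DHCP=yes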
May 17 01:44:12.696762 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 01:44:12.714415 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 01:44:12.725334 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex May 17 01:44:12.736195 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 01:44:12.745667 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 01:44:12.755444 systemd[1]: Reached target sockets.target - Socket Units. May 17 01:44:12.765350 systemd[1]: Reached target basic.target - Basic System. May 17 01:44:12.773370 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 01:44:12.773385 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 01:44:12.780363 systemd[1]: Starting containerd.service - containerd container runtime... May 17 01:44:12.790060 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 01:44:12.799938 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 01:44:12.808910 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 01:44:12.812026 coreos-metadata[1774]: May 17 01:44:12.811 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:44:12.818920 dbus-daemon[1775]: [system] SELinux support is enabled May 17 01:44:12.818992 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 01:44:12.820757 jq[1778]: false May 17 01:44:12.829388 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 01:44:12.830218 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 01:44:12.837828 extend-filesystems[1780]: Found loop4 May 17 01:44:12.839414 extend-filesystems[1780]: Found loop5 May 17 01:44:12.839414 extend-filesystems[1780]: Found loop6 May 17 01:44:12.839414 extend-filesystems[1780]: Found loop7 May 17 01:44:12.839414 extend-filesystems[1780]: Found sda May 17 01:44:12.839414 extend-filesystems[1780]: Found sdb May 17 01:44:12.839414 extend-filesystems[1780]: Found sdb1 May 17 01:44:12.839414 extend-filesystems[1780]: Found sdb2 May 17 01:44:12.839414 extend-filesystems[1780]: Found sdb3 May 17 01:44:12.839414 extend-filesystems[1780]: Found usr May 17 01:44:12.839414 extend-filesystems[1780]: Found sdb4 May 17 01:44:12.839414 extend-filesystems[1780]: Found sdb6 May 17 01:44:12.839414 extend-filesystems[1780]: Found sdb7 May 17 01:44:12.839414 extend-filesystems[1780]: Found sdb9 May 17 01:44:12.839414 extend-filesystems[1780]: Checking size of /dev/sdb9 May 17 01:44:13.012495 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks May 17 01:44:13.012513 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1425) May 17 01:44:13.012524 extend-filesystems[1780]: Resized partition /dev/sdb9 May 17 01:44:12.840055 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 01:44:13.027658 extend-filesystems[1788]: resize2fs 1.47.1 (20-May-2024) May 17 01:44:12.890766 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 01:44:12.912990 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 17 01:44:12.955702 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 01:44:12.983166 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... May 17 01:44:12.998606 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 01:44:13.052844 update_engine[1805]: I20250517 01:44:13.035282 1805 main.cc:92] Flatcar Update Engine starting May 17 01:44:13.052844 update_engine[1805]: I20250517 01:44:13.035957 1805 update_check_scheduler.cc:74] Next update check in 6m8s May 17 01:44:12.998946 systemd[1]: Starting update-engine.service - Update Engine... May 17 01:44:13.053022 jq[1806]: true May 17 01:44:13.013009 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 01:44:13.036837 systemd-logind[1800]: Watching system buttons on /dev/input/event3 (Power Button) May 17 01:44:13.036848 systemd-logind[1800]: Watching system buttons on /dev/input/event2 (Sleep Button) May 17 01:44:13.036857 systemd-logind[1800]: Watching system buttons on /dev/input/event0 (HID 0557:2419) May 17 01:44:13.037089 systemd-logind[1800]: New seat seat0. May 17 01:44:13.043603 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 01:44:13.063576 systemd[1]: Started systemd-logind.service - User Login Management. May 17 01:44:13.064952 sshd_keygen[1803]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 01:44:13.086536 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 01:44:13.086627 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 01:44:13.086798 systemd[1]: motdgen.service: Deactivated successfully. May 17 01:44:13.086880 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 01:44:13.097874 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 01:44:13.097957 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 01:44:13.109502 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 01:44:13.122582 (ntainerd)[1819]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 01:44:13.123708 jq[1817]: true May 17 01:44:13.125585 dbus-daemon[1775]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 01:44:13.126526 tar[1815]: linux-amd64/LICENSE May 17 01:44:13.126689 tar[1815]: linux-amd64/helm May 17 01:44:13.134036 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. May 17 01:44:13.134134 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. May 17 01:44:13.134213 systemd[1]: Started update-engine.service - Update Engine. May 17 01:44:13.145154 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 01:44:13.153343 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 01:44:13.153442 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 17 01:44:13.164422 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 01:44:13.164503 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 01:44:13.178037 bash[1846]: Updated "/home/core/.ssh/authorized_keys" May 17 01:44:13.183464 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 01:44:13.196418 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 01:44:13.207619 systemd[1]: issuegen.service: Deactivated successfully. May 17 01:44:13.207707 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 01:44:13.213136 locksmithd[1854]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 01:44:13.234543 systemd[1]: Starting sshkeys.service... May 17 01:44:13.242079 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 01:44:13.254267 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 01:44:13.273499 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 01:44:13.283832 coreos-metadata[1868]: May 17 01:44:13.283 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:44:13.284706 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 01:44:13.303459 containerd[1819]: time="2025-05-17T01:44:13.303413856Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 01:44:13.308587 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 01:44:13.316184 containerd[1819]: time="2025-05-17T01:44:13.316135770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 01:44:13.316952 containerd[1819]: time="2025-05-17T01:44:13.316901925Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 01:44:13.316952 containerd[1819]: time="2025-05-17T01:44:13.316922837Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 01:44:13.316952 containerd[1819]: time="2025-05-17T01:44:13.316933206Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 01:44:13.317058 containerd[1819]: time="2025-05-17T01:44:13.317016924Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 01:44:13.317058 containerd[1819]: time="2025-05-17T01:44:13.317031857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 01:44:13.317093 containerd[1819]: time="2025-05-17T01:44:13.317065215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 01:44:13.317093 containerd[1819]: time="2025-05-17T01:44:13.317073620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 17 01:44:13.317183 containerd[1819]: time="2025-05-17T01:44:13.317173240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 01:44:13.317206 containerd[1819]: time="2025-05-17T01:44:13.317183127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 01:44:13.317206 containerd[1819]: time="2025-05-17T01:44:13.317194166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 01:44:13.317206 containerd[1819]: time="2025-05-17T01:44:13.317200269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 01:44:13.317259 containerd[1819]: time="2025-05-17T01:44:13.317251991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 01:44:13.317259 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. May 17 01:44:13.317421 containerd[1819]: time="2025-05-17T01:44:13.317383448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 01:44:13.317467 containerd[1819]: time="2025-05-17T01:44:13.317457501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 01:44:13.317485 containerd[1819]: time="2025-05-17T01:44:13.317467108Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 01:44:13.317525 containerd[1819]: time="2025-05-17T01:44:13.317517645Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 01:44:13.317551 containerd[1819]: time="2025-05-17T01:44:13.317544910Z" level=info msg="metadata content store policy set" policy=shared May 17 01:44:13.326531 systemd[1]: Reached target getty.target - Login Prompts. May 17 01:44:13.328803 containerd[1819]: time="2025-05-17T01:44:13.328789735Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 01:44:13.328831 containerd[1819]: time="2025-05-17T01:44:13.328814462Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 01:44:13.328831 containerd[1819]: time="2025-05-17T01:44:13.328827188Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 01:44:13.328860 containerd[1819]: time="2025-05-17T01:44:13.328841080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 01:44:13.328860 containerd[1819]: time="2025-05-17T01:44:13.328849555Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 01:44:13.328936 containerd[1819]: time="2025-05-17T01:44:13.328926547Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 May 17 01:44:13.329071 containerd[1819]: time="2025-05-17T01:44:13.329062988Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 01:44:13.329126 containerd[1819]: time="2025-05-17T01:44:13.329117772Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 01:44:13.329143 containerd[1819]: time="2025-05-17T01:44:13.329128416Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 01:44:13.329160 containerd[1819]: time="2025-05-17T01:44:13.329142469Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 01:44:13.329160 containerd[1819]: time="2025-05-17T01:44:13.329152269Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 01:44:13.329192 containerd[1819]: time="2025-05-17T01:44:13.329161427Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 01:44:13.329192 containerd[1819]: time="2025-05-17T01:44:13.329168661Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 01:44:13.329192 containerd[1819]: time="2025-05-17T01:44:13.329176124Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 01:44:13.329192 containerd[1819]: time="2025-05-17T01:44:13.329183825Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 01:44:13.329192 containerd[1819]: time="2025-05-17T01:44:13.329190902Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 01:44:13.329255 containerd[1819]: time="2025-05-17T01:44:13.329198083Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 01:44:13.329255 containerd[1819]: time="2025-05-17T01:44:13.329204758Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 01:44:13.329255 containerd[1819]: time="2025-05-17T01:44:13.329215485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329255 containerd[1819]: time="2025-05-17T01:44:13.329226289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329255 containerd[1819]: time="2025-05-17T01:44:13.329233451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329255 containerd[1819]: time="2025-05-17T01:44:13.329240970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329255 containerd[1819]: time="2025-05-17T01:44:13.329247865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329255 containerd[1819]: time="2025-05-17T01:44:13.329254716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329374 containerd[1819]: time="2025-05-17T01:44:13.329261341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 17 01:44:13.329374 containerd[1819]: time="2025-05-17T01:44:13.329268502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329374 containerd[1819]: time="2025-05-17T01:44:13.329285586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329374 containerd[1819]: time="2025-05-17T01:44:13.329295203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329374 containerd[1819]: time="2025-05-17T01:44:13.329301548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329374 containerd[1819]: time="2025-05-17T01:44:13.329307635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329374 containerd[1819]: time="2025-05-17T01:44:13.329314346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329374 containerd[1819]: time="2025-05-17T01:44:13.329322569Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 01:44:13.329374 containerd[1819]: time="2025-05-17T01:44:13.329338600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329374 containerd[1819]: time="2025-05-17T01:44:13.329345838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329374 containerd[1819]: time="2025-05-17T01:44:13.329352788Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 01:44:13.329521 containerd[1819]: time="2025-05-17T01:44:13.329376177Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 01:44:13.329521 containerd[1819]: time="2025-05-17T01:44:13.329386198Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 01:44:13.329521 containerd[1819]: time="2025-05-17T01:44:13.329392351Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 01:44:13.329521 containerd[1819]: time="2025-05-17T01:44:13.329398800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 01:44:13.329521 containerd[1819]: time="2025-05-17T01:44:13.329404741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 01:44:13.329521 containerd[1819]: time="2025-05-17T01:44:13.329411640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 01:44:13.329521 containerd[1819]: time="2025-05-17T01:44:13.329418021Z" level=info msg="NRI interface is disabled by configuration." May 17 01:44:13.329521 containerd[1819]: time="2025-05-17T01:44:13.329423535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 01:44:13.329628 containerd[1819]: time="2025-05-17T01:44:13.329588523Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 01:44:13.329628 containerd[1819]: time="2025-05-17T01:44:13.329622508Z" level=info msg="Connect containerd service" May 17 01:44:13.329721 containerd[1819]: time="2025-05-17T01:44:13.329639576Z" level=info msg="using legacy CRI server" May 17 01:44:13.329721 containerd[1819]: time="2025-05-17T01:44:13.329643707Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 01:44:13.329863 containerd[1819]: time="2025-05-17T01:44:13.329848339Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 01:44:13.330298 containerd[1819]: time="2025-05-17T01:44:13.330283987Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 01:44:13.330401 
containerd[1819]: time="2025-05-17T01:44:13.330379031Z" level=info msg="Start subscribing containerd event" May 17 01:44:13.330422 containerd[1819]: time="2025-05-17T01:44:13.330411672Z" level=info msg="Start recovering state" May 17 01:44:13.330474 containerd[1819]: time="2025-05-17T01:44:13.330466509Z" level=info msg="Start event monitor" May 17 01:44:13.330491 containerd[1819]: time="2025-05-17T01:44:13.330470196Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 01:44:13.330491 containerd[1819]: time="2025-05-17T01:44:13.330480521Z" level=info msg="Start snapshots syncer" May 17 01:44:13.330523 containerd[1819]: time="2025-05-17T01:44:13.330494425Z" level=info msg="Start cni network conf syncer for default" May 17 01:44:13.330523 containerd[1819]: time="2025-05-17T01:44:13.330501495Z" level=info msg="Start streaming server" May 17 01:44:13.330523 containerd[1819]: time="2025-05-17T01:44:13.330510686Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 01:44:13.330567 containerd[1819]: time="2025-05-17T01:44:13.330544259Z" level=info msg="containerd successfully booted in 0.027624s" May 17 01:44:13.335722 systemd[1]: Started containerd.service - containerd container runtime. May 17 01:44:13.394318 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 May 17 01:44:13.418624 extend-filesystems[1788]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required May 17 01:44:13.418624 extend-filesystems[1788]: old_desc_blocks = 1, new_desc_blocks = 56 May 17 01:44:13.418624 extend-filesystems[1788]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. May 17 01:44:13.460380 extend-filesystems[1780]: Resized filesystem in /dev/sdb9 May 17 01:44:13.460426 tar[1815]: linux-amd64/README.md May 17 01:44:13.419463 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 01:44:13.419567 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 01:44:13.484320 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 01:44:14.239445 systemd-networkd[1605]: bond0: Gained IPv6LL May 17 01:44:14.240850 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 01:44:14.252751 systemd[1]: Reached target network-online.target - Network is Online. May 17 01:44:14.273503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 01:44:14.283987 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 01:44:14.301594 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 01:44:14.995089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 01:44:15.018477 (kubelet)[1914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 01:44:15.395783 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 May 17 01:44:15.395925 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity May 17 01:44:15.451792 kubelet[1914]: E0517 01:44:15.451721 1914 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 01:44:15.452797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 01:44:15.452871 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 01:44:16.044042 systemd-timesyncd[1769]: Contacted time server 216.229.4.69:123 (0.flatcar.pool.ntp.org). May 17 01:44:16.044204 systemd-timesyncd[1769]: Initial clock synchronization to Sat 2025-05-17 01:44:16.403415 UTC. May 17 01:44:16.386569 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 01:44:16.403589 systemd[1]: Started sshd@0-145.40.90.165:22-147.75.109.163:50524.service - OpenSSH per-connection server daemon (147.75.109.163:50524). May 17 01:44:16.442464 sshd[1934]: Accepted publickey for core from 147.75.109.163 port 50524 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:44:16.443566 sshd[1934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:44:16.449216 systemd-logind[1800]: New session 1 of user core. May 17 01:44:16.450066 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 01:44:16.475675 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 01:44:16.488159 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 01:44:16.513649 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 01:44:16.532805 (systemd)[1938]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 01:44:16.616055 systemd[1938]: Queued start job for default target default.target. May 17 01:44:16.627929 systemd[1938]: Created slice app.slice - User Application Slice. May 17 01:44:16.627942 systemd[1938]: Reached target paths.target - Paths. May 17 01:44:16.627950 systemd[1938]: Reached target timers.target - Timers. May 17 01:44:16.628580 systemd[1938]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 01:44:16.634028 systemd[1938]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 01:44:16.634056 systemd[1938]: Reached target sockets.target - Sockets. May 17 01:44:16.634065 systemd[1938]: Reached target basic.target - Basic System. May 17 01:44:16.634085 systemd[1938]: Reached target default.target - Main User Target. May 17 01:44:16.634101 systemd[1938]: Startup finished in 94ms. May 17 01:44:16.634177 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 01:44:16.643726 coreos-metadata[1868]: May 17 01:44:16.643 INFO Fetch successful May 17 01:44:16.646198 systemd[1]: Started session-1.scope - Session 1 of User core. 
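The kubelet exit above (status=1, config.yaml missing) is the normal state of a node that has not been bootstrapped yet: /var/lib/kubelet/config.yaml is typically written by kubeadm during init/join, and until then the scheduled restarts keep failing with the same error, as seen again further down. For orientation only, the smallest shape such a file takes is roughly:

# /var/lib/kubelet/config.yaml (sketch, not this node's config)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd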
May 17 01:44:16.676447 unknown[1868]: wrote ssh authorized keys file for user: core
May 17 01:44:16.701738 systemd[1]: Started sshd@1-145.40.90.165:22-147.75.109.163:50530.service - OpenSSH per-connection server daemon (147.75.109.163:50530).
May 17 01:44:16.708898 update-ssh-keys[1947]: Updated "/home/core/.ssh/authorized_keys"
May 17 01:44:16.712779 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 17 01:44:16.724156 systemd[1]: Finished sshkeys.service.
May 17 01:44:16.755021 sshd[1951]: Accepted publickey for core from 147.75.109.163 port 50530 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 01:44:16.755722 sshd[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 01:44:16.758016 systemd-logind[1800]: New session 2 of user core.
May 17 01:44:16.772405 systemd[1]: Started session-2.scope - Session 2 of User core.
May 17 01:44:16.853019 sshd[1951]: pam_unix(sshd:session): session closed for user core
May 17 01:44:16.877585 systemd[1]: sshd@1-145.40.90.165:22-147.75.109.163:50530.service: Deactivated successfully.
May 17 01:44:16.878251 systemd[1]: session-2.scope: Deactivated successfully.
May 17 01:44:16.878920 systemd-logind[1800]: Session 2 logged out. Waiting for processes to exit.
May 17 01:44:16.879529 systemd[1]: Started sshd@2-145.40.90.165:22-147.75.109.163:50544.service - OpenSSH per-connection server daemon (147.75.109.163:50544).
May 17 01:44:16.891048 systemd-logind[1800]: Removed session 2.
May 17 01:44:16.929654 sshd[1961]: Accepted publickey for core from 147.75.109.163 port 50544 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 01:44:16.930606 sshd[1961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 01:44:16.934029 systemd-logind[1800]: New session 3 of user core.
May 17 01:44:16.954525 systemd[1]: Started session-3.scope - Session 3 of User core.
May 17 01:44:17.032901 sshd[1961]: pam_unix(sshd:session): session closed for user core
May 17 01:44:17.041071 systemd[1]: sshd@2-145.40.90.165:22-147.75.109.163:50544.service: Deactivated successfully.
May 17 01:44:17.045122 systemd[1]: session-3.scope: Deactivated successfully.
May 17 01:44:17.047057 systemd-logind[1800]: Session 3 logged out. Waiting for processes to exit.
May 17 01:44:17.050098 systemd-logind[1800]: Removed session 3.
May 17 01:44:17.080202 coreos-metadata[1774]: May 17 01:44:17.080 INFO Fetch successful
May 17 01:44:17.184796 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 17 01:44:17.197596 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
May 17 01:44:17.560248 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
May 17 01:44:17.574146 systemd[1]: Reached target multi-user.target - Multi-User System.
May 17 01:44:17.585076 systemd[1]: Startup finished in 2.686s (kernel) + 24.384s (initrd) + 10.334s (userspace) = 37.405s.
May 17 01:44:17.604276 login[1882]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 17 01:44:17.607523 systemd-logind[1800]: New session 4 of user core.
May 17 01:44:17.624632 systemd[1]: Started session-4.scope - Session 4 of User core.
May 17 01:44:17.632531 login[1873]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 17 01:44:17.635218 systemd-logind[1800]: New session 5 of user core.
May 17 01:44:17.635871 systemd[1]: Started session-5.scope - Session 5 of User core.
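Each session-N.scope opened above is tracked by systemd-logind; on a similar host the same sessions can be listed directly (a sketch; output columns vary by systemd version):

    loginctl list-sessions        # one row per session-N.scope seen in the log
    loginctl session-status 1     # details for session 1, including its scope unit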
May 17 01:44:25.707738 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 01:44:25.721591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 01:44:25.957665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 01:44:25.962470 (kubelet)[2006]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 01:44:26.002132 kubelet[2006]: E0517 01:44:26.002104 2006 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 01:44:26.004780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 01:44:26.004924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 01:44:27.318633 systemd[1]: Started sshd@3-145.40.90.165:22-147.75.109.163:38830.service - OpenSSH per-connection server daemon (147.75.109.163:38830).
May 17 01:44:27.349118 sshd[2024]: Accepted publickey for core from 147.75.109.163 port 38830 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 01:44:27.349785 sshd[2024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 01:44:27.352333 systemd-logind[1800]: New session 6 of user core.
May 17 01:44:27.362554 systemd[1]: Started session-6.scope - Session 6 of User core.
May 17 01:44:27.414199 sshd[2024]: pam_unix(sshd:session): session closed for user core
May 17 01:44:27.427981 systemd[1]: sshd@3-145.40.90.165:22-147.75.109.163:38830.service: Deactivated successfully.
May 17 01:44:27.428772 systemd[1]: session-6.scope: Deactivated successfully.
May 17 01:44:27.429515 systemd-logind[1800]: Session 6 logged out. Waiting for processes to exit.
May 17 01:44:27.430200 systemd[1]: Started sshd@4-145.40.90.165:22-147.75.109.163:38844.service - OpenSSH per-connection server daemon (147.75.109.163:38844).
May 17 01:44:27.430820 systemd-logind[1800]: Removed session 6.
May 17 01:44:27.472178 sshd[2031]: Accepted publickey for core from 147.75.109.163 port 38844 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 01:44:27.473107 sshd[2031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 01:44:27.476316 systemd-logind[1800]: New session 7 of user core.
May 17 01:44:27.495621 systemd[1]: Started session-7.scope - Session 7 of User core.
May 17 01:44:27.548565 sshd[2031]: pam_unix(sshd:session): session closed for user core
May 17 01:44:27.565160 systemd[1]: sshd@4-145.40.90.165:22-147.75.109.163:38844.service: Deactivated successfully.
May 17 01:44:27.565816 systemd[1]: session-7.scope: Deactivated successfully.
May 17 01:44:27.566500 systemd-logind[1800]: Session 7 logged out. Waiting for processes to exit.
May 17 01:44:27.575718 systemd[1]: Started sshd@5-145.40.90.165:22-147.75.109.163:38856.service - OpenSSH per-connection server daemon (147.75.109.163:38856).
May 17 01:44:27.576349 systemd-logind[1800]: Removed session 7.
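Note the roughly ten-second gap between the failure at 01:44:15 and this "Scheduled restart job" at 01:44:25 (the same spacing repeats on later restarts). That cadence is consistent with a unit drop-in like the sketch below, though the kubelet.service actually shipped on this node may use different directives:

    # Sketch of a restart policy matching the observed ~10s cadence (an
    # assumption, not the unit file from this host).
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/10-restart.conf
    [Service]
    Restart=always
    RestartSec=10
    EOF
    systemctl daemon-reload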
May 17 01:44:27.613513 sshd[2038]: Accepted publickey for core from 147.75.109.163 port 38856 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 01:44:27.614321 sshd[2038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 01:44:27.617143 systemd-logind[1800]: New session 8 of user core.
May 17 01:44:27.633520 systemd[1]: Started session-8.scope - Session 8 of User core.
May 17 01:44:27.697897 sshd[2038]: pam_unix(sshd:session): session closed for user core
May 17 01:44:27.727535 systemd[1]: sshd@5-145.40.90.165:22-147.75.109.163:38856.service: Deactivated successfully.
May 17 01:44:27.731190 systemd[1]: session-8.scope: Deactivated successfully.
May 17 01:44:27.734575 systemd-logind[1800]: Session 8 logged out. Waiting for processes to exit.
May 17 01:44:27.749039 systemd[1]: Started sshd@6-145.40.90.165:22-147.75.109.163:38862.service - OpenSSH per-connection server daemon (147.75.109.163:38862).
May 17 01:44:27.751795 systemd-logind[1800]: Removed session 8.
May 17 01:44:27.825681 sshd[2045]: Accepted publickey for core from 147.75.109.163 port 38862 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 01:44:27.826881 sshd[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 01:44:27.831156 systemd-logind[1800]: New session 9 of user core.
May 17 01:44:27.849747 systemd[1]: Started session-9.scope - Session 9 of User core.
May 17 01:44:27.917477 sudo[2048]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 17 01:44:27.917630 sudo[2048]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 01:44:27.929941 sudo[2048]: pam_unix(sudo:session): session closed for user root
May 17 01:44:27.930928 sshd[2045]: pam_unix(sshd:session): session closed for user core
May 17 01:44:27.943235 systemd[1]: sshd@6-145.40.90.165:22-147.75.109.163:38862.service: Deactivated successfully.
May 17 01:44:27.944177 systemd[1]: session-9.scope: Deactivated successfully.
May 17 01:44:27.945080 systemd-logind[1800]: Session 9 logged out. Waiting for processes to exit.
May 17 01:44:27.945977 systemd[1]: Started sshd@7-145.40.90.165:22-147.75.109.163:53878.service - OpenSSH per-connection server daemon (147.75.109.163:53878).
May 17 01:44:27.946630 systemd-logind[1800]: Removed session 9.
May 17 01:44:27.993776 sshd[2053]: Accepted publickey for core from 147.75.109.163 port 53878 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 01:44:27.994993 sshd[2053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 01:44:27.999118 systemd-logind[1800]: New session 10 of user core.
May 17 01:44:28.009528 systemd[1]: Started session-10.scope - Session 10 of User core.
May 17 01:44:28.063793 sudo[2057]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 17 01:44:28.063948 sudo[2057]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 01:44:28.065976 sudo[2057]: pam_unix(sudo:session): session closed for user root
May 17 01:44:28.068632 sudo[2056]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 17 01:44:28.068790 sudo[2056]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 01:44:28.084567 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
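The sudo entries above can be read back as the script they came from; replayed by hand (as root) the sequence would look like this, with the commands taken verbatim from the audit trail:

    setenforce 1                             # /usr/sbin/setenforce 1
    rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    systemctl restart audit-rules            # reloads rules from /etc/audit/rules.d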
May 17 01:44:28.085711 auditctl[2060]: No rules
May 17 01:44:28.085937 systemd[1]: audit-rules.service: Deactivated successfully.
May 17 01:44:28.086059 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 17 01:44:28.087642 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 01:44:28.103634 augenrules[2078]: No rules
May 17 01:44:28.103984 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 01:44:28.104550 sudo[2056]: pam_unix(sudo:session): session closed for user root
May 17 01:44:28.105542 sshd[2053]: pam_unix(sshd:session): session closed for user core
May 17 01:44:28.107354 systemd[1]: sshd@7-145.40.90.165:22-147.75.109.163:53878.service: Deactivated successfully.
May 17 01:44:28.108100 systemd[1]: session-10.scope: Deactivated successfully.
May 17 01:44:28.108555 systemd-logind[1800]: Session 10 logged out. Waiting for processes to exit.
May 17 01:44:28.109520 systemd[1]: Started sshd@8-145.40.90.165:22-147.75.109.163:53882.service - OpenSSH per-connection server daemon (147.75.109.163:53882).
May 17 01:44:28.110092 systemd-logind[1800]: Removed session 10.
May 17 01:44:28.150758 sshd[2086]: Accepted publickey for core from 147.75.109.163 port 53882 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE
May 17 01:44:28.151618 sshd[2086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 01:44:28.154960 systemd-logind[1800]: New session 11 of user core.
May 17 01:44:28.172706 systemd[1]: Started session-11.scope - Session 11 of User core.
May 17 01:44:28.226324 sudo[2089]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 17 01:44:28.226477 sudo[2089]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 01:44:28.496616 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 17 01:44:28.496679 (dockerd)[2112]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 17 01:44:28.741735 dockerd[2112]: time="2025-05-17T01:44:28.741678538Z" level=info msg="Starting up"
May 17 01:44:28.808395 dockerd[2112]: time="2025-05-17T01:44:28.808291189Z" level=info msg="Loading containers: start."
May 17 01:44:28.885311 kernel: Initializing XFRM netlink socket
May 17 01:44:28.943292 systemd-networkd[1605]: docker0: Link UP
May 17 01:44:28.962256 dockerd[2112]: time="2025-05-17T01:44:28.962234450Z" level=info msg="Loading containers: done."
May 17 01:44:28.972682 dockerd[2112]: time="2025-05-17T01:44:28.972635779Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 01:44:28.972757 dockerd[2112]: time="2025-05-17T01:44:28.972685694Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 17 01:44:28.972757 dockerd[2112]: time="2025-05-17T01:44:28.972740625Z" level=info msg="Daemon has completed initialization"
May 17 01:44:28.972692 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck93657115-merged.mount: Deactivated successfully.
May 17 01:44:28.987027 dockerd[2112]: time="2025-05-17T01:44:28.986951319Z" level=info msg="API listen on /run/docker.sock"
May 17 01:44:28.987088 systemd[1]: Started docker.service - Docker Application Container Engine.
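Two details worth unpacking from this block: augenrules reports "No rules" because the rule files were just deleted, and dockerd warns that native overlay2 diffs are disabled because the kernel enables redirect_dir. Both are checkable from a shell (a sketch; reading /proc/config.gz assumes the kernel was built with IKCONFIG_PROC):

    auditctl -l                                        # prints "No rules" here
    zcat /proc/config.gz | grep CONFIG_OVERLAY_FS_REDIRECT_DIR
    docker info --format '{{.Driver}}'                 # expect overlay2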
May 17 01:44:29.733502 containerd[1819]: time="2025-05-17T01:44:29.733453724Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 17 01:44:30.217737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount694102685.mount: Deactivated successfully.
May 17 01:44:30.984047 containerd[1819]: time="2025-05-17T01:44:30.983996929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:30.984257 containerd[1819]: time="2025-05-17T01:44:30.984178673Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811"
May 17 01:44:30.984651 containerd[1819]: time="2025-05-17T01:44:30.984615364Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:30.986226 containerd[1819]: time="2025-05-17T01:44:30.986184382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:30.986878 containerd[1819]: time="2025-05-17T01:44:30.986838482Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.253357242s"
May 17 01:44:30.986878 containerd[1819]: time="2025-05-17T01:44:30.986854563Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 17 01:44:30.987196 containerd[1819]: time="2025-05-17T01:44:30.987186295Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 17 01:44:31.963065 containerd[1819]: time="2025-05-17T01:44:31.963010592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:31.963199 containerd[1819]: time="2025-05-17T01:44:31.963175202Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523"
May 17 01:44:31.963624 containerd[1819]: time="2025-05-17T01:44:31.963610920Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:31.965188 containerd[1819]: time="2025-05-17T01:44:31.965174418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:31.965854 containerd[1819]: time="2025-05-17T01:44:31.965837914Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 978.636515ms"
May 17 01:44:31.965892 containerd[1819]: time="2025-05-17T01:44:31.965856125Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 17 01:44:31.966163 containerd[1819]: time="2025-05-17T01:44:31.966152154Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 17 01:44:32.852863 containerd[1819]: time="2025-05-17T01:44:32.852806416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:32.853078 containerd[1819]: time="2025-05-17T01:44:32.852994426Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063"
May 17 01:44:32.853494 containerd[1819]: time="2025-05-17T01:44:32.853455149Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:32.856185 containerd[1819]: time="2025-05-17T01:44:32.856140661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:32.856747 containerd[1819]: time="2025-05-17T01:44:32.856704806Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 890.53548ms"
May 17 01:44:32.856747 containerd[1819]: time="2025-05-17T01:44:32.856722123Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 17 01:44:32.856963 containerd[1819]: time="2025-05-17T01:44:32.856950260Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 17 01:44:33.740400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2648287508.mount: Deactivated successfully.
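These pulls happen before the kubelet is healthy, so they are most likely driven by the install script through containerd's CRI endpoint (the socket announced when containerd booted). The same pulls can be reproduced manually with crictl, e.g. (a sketch, assuming crictl is installed):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.32.5
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images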
May 17 01:44:33.935146 containerd[1819]: time="2025-05-17T01:44:33.935118783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:33.935363 containerd[1819]: time="2025-05-17T01:44:33.935257467Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872"
May 17 01:44:33.935678 containerd[1819]: time="2025-05-17T01:44:33.935635664Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:33.936666 containerd[1819]: time="2025-05-17T01:44:33.936626393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:33.937063 containerd[1819]: time="2025-05-17T01:44:33.937020427Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.080054513s"
May 17 01:44:33.937063 containerd[1819]: time="2025-05-17T01:44:33.937038740Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\""
May 17 01:44:33.937324 containerd[1819]: time="2025-05-17T01:44:33.937311916Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 17 01:44:34.430628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822838842.mount: Deactivated successfully.
May 17 01:44:34.946654 containerd[1819]: time="2025-05-17T01:44:34.946627940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:34.946886 containerd[1819]: time="2025-05-17T01:44:34.946868134Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 17 01:44:34.947253 containerd[1819]: time="2025-05-17T01:44:34.947242176Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:34.948938 containerd[1819]: time="2025-05-17T01:44:34.948898208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:34.949576 containerd[1819]: time="2025-05-17T01:44:34.949533075Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.012204648s"
May 17 01:44:34.949576 containerd[1819]: time="2025-05-17T01:44:34.949551749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 17 01:44:34.949823 containerd[1819]: time="2025-05-17T01:44:34.949811447Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 01:44:35.378970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3100706787.mount: Deactivated successfully.
May 17 01:44:35.380347 containerd[1819]: time="2025-05-17T01:44:35.380274336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:35.380564 containerd[1819]: time="2025-05-17T01:44:35.380544520Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 17 01:44:35.381008 containerd[1819]: time="2025-05-17T01:44:35.380959498Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:35.382476 containerd[1819]: time="2025-05-17T01:44:35.382434301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:35.382794 containerd[1819]: time="2025-05-17T01:44:35.382767496Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 432.940275ms"
May 17 01:44:35.382794 containerd[1819]: time="2025-05-17T01:44:35.382783211Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 17 01:44:35.383175 containerd[1819]: time="2025-05-17T01:44:35.383163068Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 17 01:44:35.905000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2419330060.mount: Deactivated successfully.
May 17 01:44:36.075084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 17 01:44:36.088414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 01:44:36.311582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 01:44:36.315507 (kubelet)[2459]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 01:44:36.339893 kubelet[2459]: E0517 01:44:36.339856 2459 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 01:44:36.341267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 01:44:36.341403 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 01:44:37.010938 containerd[1819]: time="2025-05-17T01:44:37.010911435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:37.011144 containerd[1819]: time="2025-05-17T01:44:37.011103738Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
May 17 01:44:37.011733 containerd[1819]: time="2025-05-17T01:44:37.011696911Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:37.013475 containerd[1819]: time="2025-05-17T01:44:37.013433816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:37.014704 containerd[1819]: time="2025-05-17T01:44:37.014661583Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.631482567s"
May 17 01:44:37.014704 containerd[1819]: time="2025-05-17T01:44:37.014679514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 17 01:44:38.679042 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 01:44:38.695660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 01:44:38.709858 systemd[1]: Reloading requested from client PID 2534 ('systemctl') (unit session-11.scope)...
May 17 01:44:38.709865 systemd[1]: Reloading...
May 17 01:44:38.744341 zram_generator::config[2573]: No configuration found.
May 17 01:44:38.812607 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 01:44:38.873056 systemd[1]: Reloading finished in 162 ms.
May 17 01:44:38.915318 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 17 01:44:38.915382 systemd[1]: kubelet.service: Failed with result 'signal'.
May 17 01:44:38.915532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 01:44:38.934593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 01:44:39.188860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 01:44:39.191034 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 17 01:44:39.218696 kubelet[2638]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 01:44:39.218696 kubelet[2638]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
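systemd patched the legacy /var/run path at load time but asks for the unit itself to be fixed; a drop-in like the following sketch would silence the warning (an empty ListenStream= clears the inherited list before re-adding the non-legacy path):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload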
May 17 01:44:39.218696 kubelet[2638]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 01:44:39.218910 kubelet[2638]: I0517 01:44:39.218710 2638 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 01:44:39.796665 kubelet[2638]: I0517 01:44:39.796628 2638 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 17 01:44:39.796665 kubelet[2638]: I0517 01:44:39.796641 2638 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 01:44:39.796838 kubelet[2638]: I0517 01:44:39.796810 2638 server.go:954] "Client rotation is on, will bootstrap in background"
May 17 01:44:39.819375 kubelet[2638]: E0517 01:44:39.819301 2638 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://145.40.90.165:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 145.40.90.165:6443: connect: connection refused" logger="UnhandledError"
May 17 01:44:39.820969 kubelet[2638]: I0517 01:44:39.820937 2638 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 01:44:39.827482 kubelet[2638]: E0517 01:44:39.827444 2638 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 01:44:39.827482 kubelet[2638]: I0517 01:44:39.827460 2638 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 01:44:39.836279 kubelet[2638]: I0517 01:44:39.836238 2638 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 01:44:39.837368 kubelet[2638]: I0517 01:44:39.837321 2638 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 01:44:39.837486 kubelet[2638]: I0517 01:44:39.837340 2638 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-d569167b40","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 17 01:44:39.837486 kubelet[2638]: I0517 01:44:39.837464 2638 topology_manager.go:138] "Creating topology manager with none policy"
May 17 01:44:39.837486 kubelet[2638]: I0517 01:44:39.837471 2638 container_manager_linux.go:304] "Creating device plugin manager"
May 17 01:44:39.837599 kubelet[2638]: I0517 01:44:39.837545 2638 state_mem.go:36] "Initialized new in-memory state store"
May 17 01:44:39.841008 kubelet[2638]: I0517 01:44:39.840959 2638 kubelet.go:446] "Attempting to sync node with API server"
May 17 01:44:39.841008 kubelet[2638]: I0517 01:44:39.840970 2638 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 01:44:39.841008 kubelet[2638]: I0517 01:44:39.840979 2638 kubelet.go:352] "Adding apiserver pod source"
May 17 01:44:39.841008 kubelet[2638]: I0517 01:44:39.840985 2638 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 01:44:39.843939 kubelet[2638]: I0517 01:44:39.843898 2638 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 17 01:44:39.844171 kubelet[2638]: I0517 01:44:39.844133 2638 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 01:44:39.844581 kubelet[2638]: W0517 01:44:39.844545 2638 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
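"Adding static pod path" is the hook through which the control-plane pods below arrive: any manifest dropped into /etc/kubernetes/manifests is run by this kubelet even without a reachable API server. For illustration, a hypothetical minimal manifest of that kind (the real kube-apiserver/controller-manager/scheduler files on this node are generated by the install flow and are far larger; the file and pod names here are invented):

    # Hypothetical static pod; the kubelet picks this up from the manifest dir.
    cat <<'EOF' >/etc/kubernetes/manifests/example-static-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-static-pod
      namespace: kube-system
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.10
    EOF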
May 17 01:44:39.846144 kubelet[2638]: I0517 01:44:39.846108 2638 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 17 01:44:39.846144 kubelet[2638]: I0517 01:44:39.846123 2638 server.go:1287] "Started kubelet"
May 17 01:44:39.846236 kubelet[2638]: I0517 01:44:39.846212 2638 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 17 01:44:39.850409 kubelet[2638]: W0517 01:44:39.850213 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://145.40.90.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused
May 17 01:44:39.850409 kubelet[2638]: E0517 01:44:39.850284 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://145.40.90.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 145.40.90.165:6443: connect: connection refused" logger="UnhandledError"
May 17 01:44:39.850409 kubelet[2638]: W0517 01:44:39.850294 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://145.40.90.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-d569167b40&limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused
May 17 01:44:39.850409 kubelet[2638]: E0517 01:44:39.850341 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://145.40.90.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-d569167b40&limit=500&resourceVersion=0\": dial tcp 145.40.90.165:6443: connect: connection refused" logger="UnhandledError"
May 17 01:44:39.851157 kubelet[2638]: I0517 01:44:39.851110 2638 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 01:44:39.851332 kubelet[2638]: I0517 01:44:39.851295 2638 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 01:44:39.852263 kubelet[2638]: E0517 01:44:39.851027 2638 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://145.40.90.165:6443/api/v1/namespaces/default/events\": dial tcp 145.40.90.165:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-n-d569167b40.18402d22164e303b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-n-d569167b40,UID:ci-4081.3.3-n-d569167b40,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-n-d569167b40,},FirstTimestamp:2025-05-17 01:44:39.846113339 +0000 UTC m=+0.653258374,LastTimestamp:2025-05-17 01:44:39.846113339 +0000 UTC m=+0.653258374,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-n-d569167b40,}"
May 17 01:44:39.852340 kubelet[2638]: I0517 01:44:39.852317 2638 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 01:44:39.852388 kubelet[2638]: I0517 01:44:39.852338 2638 server.go:479] "Adding debug handlers to kubelet server"
May 17 01:44:39.852388 kubelet[2638]: I0517 01:44:39.852355 2638 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 01:44:39.852446 kubelet[2638]: I0517 01:44:39.852389 2638 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 17 01:44:39.852472 kubelet[2638]: I0517 01:44:39.852450 2638 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 17 01:44:39.852472 kubelet[2638]: E0517 01:44:39.852455 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:39.852509 kubelet[2638]: E0517 01:44:39.852471 2638 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 01:44:39.852509 kubelet[2638]: I0517 01:44:39.852497 2638 reconciler.go:26] "Reconciler: start to sync state"
May 17 01:44:39.852612 kubelet[2638]: E0517 01:44:39.852590 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-d569167b40?timeout=10s\": dial tcp 145.40.90.165:6443: connect: connection refused" interval="200ms"
May 17 01:44:39.852686 kubelet[2638]: W0517 01:44:39.852660 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://145.40.90.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused
May 17 01:44:39.852710 kubelet[2638]: E0517 01:44:39.852697 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://145.40.90.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 145.40.90.165:6443: connect: connection refused" logger="UnhandledError"
May 17 01:44:39.852758 kubelet[2638]: I0517 01:44:39.852750 2638 factory.go:221] Registration of the systemd container factory successfully
May 17 01:44:39.852798 kubelet[2638]: I0517 01:44:39.852790 2638 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 01:44:39.853222 kubelet[2638]: I0517 01:44:39.853215 2638 factory.go:221] Registration of the containerd container factory successfully
May 17 01:44:39.860171 kubelet[2638]: I0517 01:44:39.860151 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 01:44:39.860792 kubelet[2638]: I0517 01:44:39.860766 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 01:44:39.860792 kubelet[2638]: I0517 01:44:39.860794 2638 status_manager.go:227] "Starting to sync pod status with apiserver"
May 17 01:44:39.860873 kubelet[2638]: I0517 01:44:39.860821 2638 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
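Every "connection refused" in this block has the same cause: the kubelet is up, but the kube-apiserver it dials at 145.40.90.165:6443 does not exist yet (it is itself one of the static pods being created below). The endpoint can be probed directly while watching the log; expect refusals until the apiserver container starts:

    curl -k https://145.40.90.165:6443/healthz   # refused until the static pod is up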
May 17 01:44:39.860873 kubelet[2638]: I0517 01:44:39.860826 2638 kubelet.go:2382] "Starting kubelet main sync loop"
May 17 01:44:39.860873 kubelet[2638]: E0517 01:44:39.860849 2638 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 01:44:39.861739 kubelet[2638]: W0517 01:44:39.861727 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://145.40.90.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused
May 17 01:44:39.861788 kubelet[2638]: E0517 01:44:39.861746 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://145.40.90.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 145.40.90.165:6443: connect: connection refused" logger="UnhandledError"
May 17 01:44:39.953424 kubelet[2638]: E0517 01:44:39.953367 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:39.961677 kubelet[2638]: E0517 01:44:39.961565 2638 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 17 01:44:40.007765 kubelet[2638]: I0517 01:44:40.007718 2638 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 01:44:40.007765 kubelet[2638]: I0517 01:44:40.007755 2638 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 01:44:40.008078 kubelet[2638]: I0517 01:44:40.007794 2638 state_mem.go:36] "Initialized new in-memory state store"
May 17 01:44:40.009632 kubelet[2638]: I0517 01:44:40.009613 2638 policy_none.go:49] "None policy: Start"
May 17 01:44:40.009632 kubelet[2638]: I0517 01:44:40.009631 2638 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 01:44:40.009745 kubelet[2638]: I0517 01:44:40.009643 2638 state_mem.go:35] "Initializing new in-memory state store"
May 17 01:44:40.012830 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 17 01:44:40.027198 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 17 01:44:40.029398 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 17 01:44:40.040077 kubelet[2638]: I0517 01:44:40.040036 2638 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 01:44:40.040176 kubelet[2638]: I0517 01:44:40.040165 2638 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 01:44:40.040211 kubelet[2638]: I0517 01:44:40.040176 2638 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 01:44:40.040353 kubelet[2638]: I0517 01:44:40.040314 2638 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 01:44:40.040799 kubelet[2638]: E0517 01:44:40.040757 2638 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 01:44:40.040799 kubelet[2638]: E0517 01:44:40.040786 2638 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:40.053289 kubelet[2638]: E0517 01:44:40.053178 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-d569167b40?timeout=10s\": dial tcp 145.40.90.165:6443: connect: connection refused" interval="400ms"
May 17 01:44:40.144944 kubelet[2638]: I0517 01:44:40.144851 2638 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:40.145645 kubelet[2638]: E0517 01:44:40.145556 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.90.165:6443/api/v1/nodes\": dial tcp 145.40.90.165:6443: connect: connection refused" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:40.182935 systemd[1]: Created slice kubepods-burstable-pod16e13da4700e7c812f46e3cc2b23166e.slice - libcontainer container kubepods-burstable-pod16e13da4700e7c812f46e3cc2b23166e.slice.
May 17 01:44:40.216057 kubelet[2638]: E0517 01:44:40.215993 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-d569167b40\" not found" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:40.223750 systemd[1]: Created slice kubepods-burstable-pod474c560cb8f203b083232c4d35b5d361.slice - libcontainer container kubepods-burstable-pod474c560cb8f203b083232c4d35b5d361.slice.
May 17 01:44:40.241354 kubelet[2638]: E0517 01:44:40.241271 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-d569167b40\" not found" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:40.248505 systemd[1]: Created slice kubepods-burstable-podb2fe65cf587d99c0f358d633c7336a27.slice - libcontainer container kubepods-burstable-podb2fe65cf587d99c0f358d633c7336a27.slice.
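Also visible here: the lease controller's retry interval doubles from 200ms to 400ms (and to 800ms further down) while the API server keeps refusing connections. The kubepods QoS slice hierarchy created above can be inspected once the node settles (a sketch; systemd-cgls ships with systemd):

    systemctl status kubepods.slice kubepods-burstable.slice kubepods-besteffort.slice
    systemd-cgls -u kubepods.slice    # show the cgroup subtree for the unit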
May 17 01:44:40.253036 kubelet[2638]: E0517 01:44:40.252956 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-d569167b40\" not found" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:40.255185 kubelet[2638]: I0517 01:44:40.255077 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/474c560cb8f203b083232c4d35b5d361-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-d569167b40\" (UID: \"474c560cb8f203b083232c4d35b5d361\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:40.255185 kubelet[2638]: I0517 01:44:40.255156 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16e13da4700e7c812f46e3cc2b23166e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-d569167b40\" (UID: \"16e13da4700e7c812f46e3cc2b23166e\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-d569167b40"
May 17 01:44:40.255449 kubelet[2638]: I0517 01:44:40.255210 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/474c560cb8f203b083232c4d35b5d361-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-d569167b40\" (UID: \"474c560cb8f203b083232c4d35b5d361\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:40.255449 kubelet[2638]: I0517 01:44:40.255264 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/474c560cb8f203b083232c4d35b5d361-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-d569167b40\" (UID: \"474c560cb8f203b083232c4d35b5d361\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:40.255449 kubelet[2638]: I0517 01:44:40.255332 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/474c560cb8f203b083232c4d35b5d361-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-d569167b40\" (UID: \"474c560cb8f203b083232c4d35b5d361\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:40.255449 kubelet[2638]: I0517 01:44:40.255376 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16e13da4700e7c812f46e3cc2b23166e-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-d569167b40\" (UID: \"16e13da4700e7c812f46e3cc2b23166e\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-d569167b40"
May 17 01:44:40.255449 kubelet[2638]: I0517 01:44:40.255418 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16e13da4700e7c812f46e3cc2b23166e-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-d569167b40\" (UID: \"16e13da4700e7c812f46e3cc2b23166e\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-d569167b40"
May 17 01:44:40.255895 kubelet[2638]: I0517 01:44:40.255459 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/474c560cb8f203b083232c4d35b5d361-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-d569167b40\" (UID: \"474c560cb8f203b083232c4d35b5d361\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:40.255895 kubelet[2638]: I0517 01:44:40.255502 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2fe65cf587d99c0f358d633c7336a27-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-d569167b40\" (UID: \"b2fe65cf587d99c0f358d633c7336a27\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-d569167b40"
May 17 01:44:40.350354 kubelet[2638]: I0517 01:44:40.350255 2638 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:40.351066 kubelet[2638]: E0517 01:44:40.350943 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.90.165:6443/api/v1/nodes\": dial tcp 145.40.90.165:6443: connect: connection refused" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:40.454453 kubelet[2638]: E0517 01:44:40.454315 2638 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-n-d569167b40?timeout=10s\": dial tcp 145.40.90.165:6443: connect: connection refused" interval="800ms"
May 17 01:44:40.519009 containerd[1819]: time="2025-05-17T01:44:40.518865831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-d569167b40,Uid:16e13da4700e7c812f46e3cc2b23166e,Namespace:kube-system,Attempt:0,}"
May 17 01:44:40.543932 containerd[1819]: time="2025-05-17T01:44:40.543795244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-d569167b40,Uid:474c560cb8f203b083232c4d35b5d361,Namespace:kube-system,Attempt:0,}"
May 17 01:44:40.553763 containerd[1819]: time="2025-05-17T01:44:40.553705814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-d569167b40,Uid:b2fe65cf587d99c0f358d633c7336a27,Namespace:kube-system,Attempt:0,}"
May 17 01:44:40.756043 kubelet[2638]: I0517 01:44:40.755883 2638 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:40.756679 kubelet[2638]: E0517 01:44:40.756582 2638 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.90.165:6443/api/v1/nodes\": dial tcp 145.40.90.165:6443: connect: connection refused" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:40.937164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631560741.mount: Deactivated successfully.
May 17 01:44:40.938763 containerd[1819]: time="2025-05-17T01:44:40.938709484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 01:44:40.939477 containerd[1819]: time="2025-05-17T01:44:40.939435552Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 01:44:40.939902 containerd[1819]: time="2025-05-17T01:44:40.939852898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 17 01:44:40.940040 containerd[1819]: time="2025-05-17T01:44:40.940000562Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 01:44:40.940267 containerd[1819]: time="2025-05-17T01:44:40.940225571Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 17 01:44:40.940464 containerd[1819]: time="2025-05-17T01:44:40.940416274Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 17 01:44:40.940810 containerd[1819]: time="2025-05-17T01:44:40.940766710Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 01:44:40.942965 containerd[1819]: time="2025-05-17T01:44:40.942923107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 01:44:40.943460 containerd[1819]: time="2025-05-17T01:44:40.943420371Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 399.475347ms"
May 17 01:44:40.943788 containerd[1819]: time="2025-05-17T01:44:40.943746427Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 390.012064ms"
May 17 01:44:40.944773 containerd[1819]: time="2025-05-17T01:44:40.944760089Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 425.742767ms"
May 17 01:44:40.986140 kubelet[2638]: W0517 01:44:40.986102 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://145.40.90.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused
May 17 01:44:40.986243 kubelet[2638]: E0517 01:44:40.986146 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://145.40.90.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 145.40.90.165:6443: connect: connection refused" logger="UnhandledError"
May 17 01:44:40.994734 kubelet[2638]: W0517 01:44:40.994683 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://145.40.90.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused
May 17 01:44:40.994734 kubelet[2638]: E0517 01:44:40.994708 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://145.40.90.165:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 145.40.90.165:6443: connect: connection refused" logger="UnhandledError"
May 17 01:44:41.031033 containerd[1819]: time="2025-05-17T01:44:41.030924337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:44:41.031134 containerd[1819]: time="2025-05-17T01:44:41.031102825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:44:41.031156 containerd[1819]: time="2025-05-17T01:44:41.031133389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:44:41.031156 containerd[1819]: time="2025-05-17T01:44:41.031142144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:44:41.031156 containerd[1819]: time="2025-05-17T01:44:41.031131056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:44:41.031239 containerd[1819]: time="2025-05-17T01:44:41.031155731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:44:41.031239 containerd[1819]: time="2025-05-17T01:44:41.031163439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:44:41.031239 containerd[1819]: time="2025-05-17T01:44:41.030984137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:44:41.031239 containerd[1819]: time="2025-05-17T01:44:41.031188975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:44:41.031239 containerd[1819]: time="2025-05-17T01:44:41.031213573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:44:41.031239 containerd[1819]: time="2025-05-17T01:44:41.031199583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:44:41.031355 containerd[1819]: time="2025-05-17T01:44:41.031279420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:44:41.046496 kubelet[2638]: W0517 01:44:41.046436 2638 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://145.40.90.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-d569167b40&limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused
May 17 01:44:41.046496 kubelet[2638]: E0517 01:44:41.046472 2638 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://145.40.90.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-n-d569167b40&limit=500&resourceVersion=0\": dial tcp 145.40.90.165:6443: connect: connection refused" logger="UnhandledError"
May 17 01:44:41.059549 systemd[1]: Started cri-containerd-4541e27d7f64e277fa84d2d9b0719f76c747087b338bc8513a72a5e2c412e8cd.scope - libcontainer container 4541e27d7f64e277fa84d2d9b0719f76c747087b338bc8513a72a5e2c412e8cd.
May 17 01:44:41.060366 systemd[1]: Started cri-containerd-b020d3a4e82be65962a3f114aa922408b6f43ef2bfaeaaea622aa5c158367379.scope - libcontainer container b020d3a4e82be65962a3f114aa922408b6f43ef2bfaeaaea622aa5c158367379.
May 17 01:44:41.061251 systemd[1]: Started cri-containerd-fec6efdc422f6916698b37b2a1e99b8eea39e13b8b4d240208c7b56e2066aefa.scope - libcontainer container fec6efdc422f6916698b37b2a1e99b8eea39e13b8b4d240208c7b56e2066aefa.
May 17 01:44:41.084753 containerd[1819]: time="2025-05-17T01:44:41.084724552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-n-d569167b40,Uid:474c560cb8f203b083232c4d35b5d361,Namespace:kube-system,Attempt:0,} returns sandbox id \"b020d3a4e82be65962a3f114aa922408b6f43ef2bfaeaaea622aa5c158367379\""
May 17 01:44:41.084846 containerd[1819]: time="2025-05-17T01:44:41.084760973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-n-d569167b40,Uid:b2fe65cf587d99c0f358d633c7336a27,Namespace:kube-system,Attempt:0,} returns sandbox id \"4541e27d7f64e277fa84d2d9b0719f76c747087b338bc8513a72a5e2c412e8cd\""
May 17 01:44:41.086964 containerd[1819]: time="2025-05-17T01:44:41.086945370Z" level=info msg="CreateContainer within sandbox \"b020d3a4e82be65962a3f114aa922408b6f43ef2bfaeaaea622aa5c158367379\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 17 01:44:41.086964 containerd[1819]: time="2025-05-17T01:44:41.086956231Z" level=info msg="CreateContainer within sandbox \"4541e27d7f64e277fa84d2d9b0719f76c747087b338bc8513a72a5e2c412e8cd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 17 01:44:41.087580 containerd[1819]: time="2025-05-17T01:44:41.087567785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-n-d569167b40,Uid:16e13da4700e7c812f46e3cc2b23166e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fec6efdc422f6916698b37b2a1e99b8eea39e13b8b4d240208c7b56e2066aefa\""
May 17 01:44:41.088421 containerd[1819]: time="2025-05-17T01:44:41.088405607Z" level=info msg="CreateContainer within sandbox \"fec6efdc422f6916698b37b2a1e99b8eea39e13b8b4d240208c7b56e2066aefa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 17 01:44:41.094102 containerd[1819]: time="2025-05-17T01:44:41.094056363Z" level=info msg="CreateContainer within sandbox \"4541e27d7f64e277fa84d2d9b0719f76c747087b338bc8513a72a5e2c412e8cd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1430b307695f3446a44cc899b37f8cad24f79d358cc2763ef571d5e6149dc5a2\""
May 17 01:44:41.094415 containerd[1819]: time="2025-05-17T01:44:41.094403033Z" level=info msg="StartContainer for \"1430b307695f3446a44cc899b37f8cad24f79d358cc2763ef571d5e6149dc5a2\""
May 17 01:44:41.094960 containerd[1819]: time="2025-05-17T01:44:41.094946081Z" level=info msg="CreateContainer within sandbox \"b020d3a4e82be65962a3f114aa922408b6f43ef2bfaeaaea622aa5c158367379\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a181c8a520d1c5f75e32b1c139882cf37afbcb3be3faeec45ce9dfbcb03623ee\""
May 17 01:44:41.095125 containerd[1819]: time="2025-05-17T01:44:41.095112578Z" level=info msg="StartContainer for \"a181c8a520d1c5f75e32b1c139882cf37afbcb3be3faeec45ce9dfbcb03623ee\""
May 17 01:44:41.095332 containerd[1819]: time="2025-05-17T01:44:41.095321543Z" level=info msg="CreateContainer within sandbox \"fec6efdc422f6916698b37b2a1e99b8eea39e13b8b4d240208c7b56e2066aefa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0311eba1241701e20021cee7829b6fd276e034e6ca5786b0813403e807a8d4b\""
May 17 01:44:41.095463 containerd[1819]: time="2025-05-17T01:44:41.095453381Z" level=info msg="StartContainer for \"a0311eba1241701e20021cee7829b6fd276e034e6ca5786b0813403e807a8d4b\""
May 17 01:44:41.113603 systemd[1]: Started cri-containerd-1430b307695f3446a44cc899b37f8cad24f79d358cc2763ef571d5e6149dc5a2.scope - libcontainer container 1430b307695f3446a44cc899b37f8cad24f79d358cc2763ef571d5e6149dc5a2.
May 17 01:44:41.114235 systemd[1]: Started cri-containerd-a0311eba1241701e20021cee7829b6fd276e034e6ca5786b0813403e807a8d4b.scope - libcontainer container a0311eba1241701e20021cee7829b6fd276e034e6ca5786b0813403e807a8d4b.
May 17 01:44:41.114768 systemd[1]: Started cri-containerd-a181c8a520d1c5f75e32b1c139882cf37afbcb3be3faeec45ce9dfbcb03623ee.scope - libcontainer container a181c8a520d1c5f75e32b1c139882cf37afbcb3be3faeec45ce9dfbcb03623ee.
May 17 01:44:41.137349 containerd[1819]: time="2025-05-17T01:44:41.137320997Z" level=info msg="StartContainer for \"1430b307695f3446a44cc899b37f8cad24f79d358cc2763ef571d5e6149dc5a2\" returns successfully"
May 17 01:44:41.137426 containerd[1819]: time="2025-05-17T01:44:41.137321344Z" level=info msg="StartContainer for \"a0311eba1241701e20021cee7829b6fd276e034e6ca5786b0813403e807a8d4b\" returns successfully"
May 17 01:44:41.138630 containerd[1819]: time="2025-05-17T01:44:41.138607262Z" level=info msg="StartContainer for \"a181c8a520d1c5f75e32b1c139882cf37afbcb3be3faeec45ce9dfbcb03623ee\" returns successfully"
May 17 01:44:41.558100 kubelet[2638]: I0517 01:44:41.558084 2638 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:41.631520 kubelet[2638]: E0517 01:44:41.630192 2638 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-n-d569167b40\" not found" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:41.741661 kubelet[2638]: I0517 01:44:41.741642 2638 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:41.741661 kubelet[2638]: E0517 01:44:41.741663 2638 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.3-n-d569167b40\": node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:41.746992 kubelet[2638]: E0517 01:44:41.746976 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:41.847620 kubelet[2638]: E0517 01:44:41.847570 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:41.867320 kubelet[2638]: E0517 01:44:41.867300 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-d569167b40\" not found" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:41.867652 kubelet[2638]: E0517 01:44:41.867638 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-d569167b40\" not found" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:41.868248 kubelet[2638]: E0517 01:44:41.868239 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-d569167b40\" not found" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:41.948649 kubelet[2638]: E0517 01:44:41.948549 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:42.049515 kubelet[2638]: E0517 01:44:42.049376 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:42.150793 kubelet[2638]: E0517 01:44:42.150564 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:42.250733 kubelet[2638]: E0517 01:44:42.250670 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:42.351253 kubelet[2638]: E0517 01:44:42.351141 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:42.452759 kubelet[2638]: E0517 01:44:42.452519 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:42.553310 kubelet[2638]: E0517 01:44:42.553212 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:42.654318 kubelet[2638]: E0517 01:44:42.654217 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:42.755432 kubelet[2638]: E0517 01:44:42.755226 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:42.855603 kubelet[2638]: E0517 01:44:42.855480 2638 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:42.874416 kubelet[2638]: E0517 01:44:42.874334 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-d569167b40\" not found" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:42.874585 kubelet[2638]: E0517 01:44:42.874497 2638 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-n-d569167b40\" not found" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:43.053269 kubelet[2638]: I0517 01:44:43.053056 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:43.066022 kubelet[2638]: W0517 01:44:43.065931 2638 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:44:43.066219 kubelet[2638]: I0517 01:44:43.066190 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-d569167b40"
May 17 01:44:43.083931 kubelet[2638]: W0517 01:44:43.083830 2638 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:44:43.084087 kubelet[2638]: I0517 01:44:43.084000 2638 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-d569167b40"
May 17 01:44:43.090326 kubelet[2638]: W0517 01:44:43.090245 2638 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:44:43.843156 kubelet[2638]: I0517 01:44:43.843046 2638 apiserver.go:52] "Watching apiserver"
May 17 01:44:43.853748 kubelet[2638]: I0517 01:44:43.853664 2638 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 17 01:44:44.139533 systemd[1]: Reloading requested from client PID 2965 ('systemctl') (unit session-11.scope)...
May 17 01:44:44.139541 systemd[1]: Reloading...
May 17 01:44:44.173361 zram_generator::config[3004]: No configuration found.
May 17 01:44:44.241143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 01:44:44.309199 systemd[1]: Reloading finished in 169 ms.
May 17 01:44:44.335107 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 01:44:44.335189 kubelet[2638]: I0517 01:44:44.335120 2638 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 01:44:44.365730 systemd[1]: kubelet.service: Deactivated successfully.
May 17 01:44:44.365830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 01:44:44.365852 systemd[1]: kubelet.service: Consumed 1.115s CPU time, 144.6M memory peak, 0B memory swap peak.
May 17 01:44:44.383643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 01:44:44.633705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 01:44:44.639347 (kubelet)[3068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 17 01:44:44.670843 kubelet[3068]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 01:44:44.670843 kubelet[3068]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 17 01:44:44.670843 kubelet[3068]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 01:44:44.671086 kubelet[3068]: I0517 01:44:44.670891 3068 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 01:44:44.674580 kubelet[3068]: I0517 01:44:44.674541 3068 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 17 01:44:44.674580 kubelet[3068]: I0517 01:44:44.674552 3068 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 01:44:44.674911 kubelet[3068]: I0517 01:44:44.674897 3068 server.go:954] "Client rotation is on, will bootstrap in background"
May 17 01:44:44.675877 kubelet[3068]: I0517 01:44:44.675840 3068 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 17 01:44:44.677060 kubelet[3068]: I0517 01:44:44.677024 3068 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 01:44:44.678401 kubelet[3068]: E0517 01:44:44.678389 3068 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 01:44:44.678401 kubelet[3068]: I0517 01:44:44.678401 3068 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 01:44:44.685404 kubelet[3068]: I0517 01:44:44.685369 3068 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 01:44:44.685503 kubelet[3068]: I0517 01:44:44.685463 3068 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 01:44:44.685825 kubelet[3068]: I0517 01:44:44.685475 3068 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-n-d569167b40","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 17 01:44:44.685910 kubelet[3068]: I0517 01:44:44.685836 3068 topology_manager.go:138] "Creating topology manager with none policy"
May 17 01:44:44.685910 kubelet[3068]: I0517 01:44:44.685849 3068 container_manager_linux.go:304] "Creating device plugin manager"
May 17 01:44:44.685910 kubelet[3068]: I0517 01:44:44.685900 3068 state_mem.go:36] "Initialized new in-memory state store"
May 17 01:44:44.686144 kubelet[3068]: I0517 01:44:44.686114 3068 kubelet.go:446] "Attempting to sync node with API server"
May 17 01:44:44.686202 kubelet[3068]: I0517 01:44:44.686193 3068 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 01:44:44.686230 kubelet[3068]: I0517 01:44:44.686224 3068 kubelet.go:352] "Adding apiserver pod source"
May 17 01:44:44.686249 kubelet[3068]: I0517 01:44:44.686243 3068 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 01:44:44.687004 kubelet[3068]: I0517 01:44:44.686993 3068 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 17 01:44:44.687386 kubelet[3068]: I0517 01:44:44.687378 3068 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 01:44:44.687698 kubelet[3068]: I0517 01:44:44.687691 3068 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 17 01:44:44.687718 kubelet[3068]: I0517 01:44:44.687711 3068 server.go:1287] "Started kubelet"
May 17 01:44:44.687773 kubelet[3068]: I0517 01:44:44.687751 3068 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 17 01:44:44.687809 kubelet[3068]: I0517 01:44:44.687776 3068 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 01:44:44.687939 kubelet[3068]: I0517 01:44:44.687931 3068 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 01:44:44.688471 kubelet[3068]: I0517 01:44:44.688464 3068 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 01:44:44.688529 kubelet[3068]: E0517 01:44:44.688513 3068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.3-n-d569167b40\" not found"
May 17 01:44:44.688529 kubelet[3068]: I0517 01:44:44.688525 3068 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 17 01:44:44.688579 kubelet[3068]: I0517 01:44:44.688549 3068 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 01:44:44.688654 kubelet[3068]: I0517 01:44:44.688643 3068 reconciler.go:26] "Reconciler: start to sync state"
May 17 01:44:44.688687 kubelet[3068]: I0517 01:44:44.688627 3068 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 17 01:44:44.688687 kubelet[3068]: E0517 01:44:44.688666 3068 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 01:44:44.689148 kubelet[3068]: I0517 01:44:44.689133 3068 factory.go:221] Registration of the systemd container factory successfully
May 17 01:44:44.689438 kubelet[3068]: I0517 01:44:44.689373 3068 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 01:44:44.689699 kubelet[3068]: I0517 01:44:44.689688 3068 server.go:479] "Adding debug handlers to kubelet server"
May 17 01:44:44.690820 kubelet[3068]: I0517 01:44:44.690809 3068 factory.go:221] Registration of the containerd container factory successfully
May 17 01:44:44.693874 kubelet[3068]: I0517 01:44:44.693856 3068 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 01:44:44.694443 kubelet[3068]: I0517 01:44:44.694435 3068 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 01:44:44.694472 kubelet[3068]: I0517 01:44:44.694449 3068 status_manager.go:227] "Starting to sync pod status with apiserver"
May 17 01:44:44.694472 kubelet[3068]: I0517 01:44:44.694466 3068 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 17 01:44:44.694510 kubelet[3068]: I0517 01:44:44.694472 3068 kubelet.go:2382] "Starting kubelet main sync loop"
May 17 01:44:44.694510 kubelet[3068]: E0517 01:44:44.694503 3068 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 01:44:44.704570 kubelet[3068]: I0517 01:44:44.704517 3068 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 01:44:44.704570 kubelet[3068]: I0517 01:44:44.704528 3068 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 01:44:44.704570 kubelet[3068]: I0517 01:44:44.704539 3068 state_mem.go:36] "Initialized new in-memory state store"
May 17 01:44:44.704678 kubelet[3068]: I0517 01:44:44.704626 3068 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 17 01:44:44.704678 kubelet[3068]: I0517 01:44:44.704633 3068 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 17 01:44:44.704678 kubelet[3068]: I0517 01:44:44.704644 3068 policy_none.go:49] "None policy: Start"
May 17 01:44:44.704678 kubelet[3068]: I0517 01:44:44.704649 3068 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 01:44:44.704678 kubelet[3068]: I0517 01:44:44.704655 3068 state_mem.go:35] "Initializing new in-memory state store"
May 17 01:44:44.704750 kubelet[3068]: I0517 01:44:44.704714 3068 state_mem.go:75] "Updated machine memory state"
May 17 01:44:44.706580 kubelet[3068]: I0517 01:44:44.706543 3068 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 01:44:44.706667 kubelet[3068]: I0517 01:44:44.706626 3068 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 01:44:44.706667 kubelet[3068]: I0517 01:44:44.706633 3068 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 01:44:44.706773 kubelet[3068]: I0517 01:44:44.706736 3068 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 01:44:44.707088 kubelet[3068]: E0517 01:44:44.707076 3068 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 01:44:44.795917 kubelet[3068]: I0517 01:44:44.795852 3068 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.796345 kubelet[3068]: I0517 01:44:44.795994 3068 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.796345 kubelet[3068]: I0517 01:44:44.796121 3068 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.814173 kubelet[3068]: I0517 01:44:44.814087 3068 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:44.815155 kubelet[3068]: W0517 01:44:44.815089 3068 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:44:44.815417 kubelet[3068]: E0517 01:44:44.815257 3068 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-n-d569167b40\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.815597 kubelet[3068]: W0517 01:44:44.815485 3068 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:44:44.815597 kubelet[3068]: W0517 01:44:44.815504 3068 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:44:44.815983 kubelet[3068]: E0517 01:44:44.815634 3068 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-n-d569167b40\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.815983 kubelet[3068]: E0517 01:44:44.815646 3068 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-n-d569167b40\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.823660 kubelet[3068]: I0517 01:44:44.823572 3068 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:44.823822 kubelet[3068]: I0517 01:44:44.823708 3068 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.3-n-d569167b40"
May 17 01:44:44.991092 kubelet[3068]: I0517 01:44:44.990828 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/474c560cb8f203b083232c4d35b5d361-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-d569167b40\" (UID: \"474c560cb8f203b083232c4d35b5d361\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.991092 kubelet[3068]: I0517 01:44:44.990955 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/474c560cb8f203b083232c4d35b5d361-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-n-d569167b40\" (UID: \"474c560cb8f203b083232c4d35b5d361\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.991092 kubelet[3068]: I0517 01:44:44.991063 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2fe65cf587d99c0f358d633c7336a27-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-n-d569167b40\" (UID: \"b2fe65cf587d99c0f358d633c7336a27\") " pod="kube-system/kube-scheduler-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.991564 kubelet[3068]: I0517 01:44:44.991136 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16e13da4700e7c812f46e3cc2b23166e-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-n-d569167b40\" (UID: \"16e13da4700e7c812f46e3cc2b23166e\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.991564 kubelet[3068]: I0517 01:44:44.991218 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/474c560cb8f203b083232c4d35b5d361-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-n-d569167b40\" (UID: \"474c560cb8f203b083232c4d35b5d361\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.991564 kubelet[3068]: I0517 01:44:44.991366 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/474c560cb8f203b083232c4d35b5d361-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-n-d569167b40\" (UID: \"474c560cb8f203b083232c4d35b5d361\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.991564 kubelet[3068]: I0517 01:44:44.991464 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/474c560cb8f203b083232c4d35b5d361-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-n-d569167b40\" (UID: \"474c560cb8f203b083232c4d35b5d361\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.991564 kubelet[3068]: I0517 01:44:44.991534 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16e13da4700e7c812f46e3cc2b23166e-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-n-d569167b40\" (UID: \"16e13da4700e7c812f46e3cc2b23166e\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-d569167b40"
May 17 01:44:44.992168 kubelet[3068]: I0517 01:44:44.991602 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16e13da4700e7c812f46e3cc2b23166e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-n-d569167b40\" (UID: \"16e13da4700e7c812f46e3cc2b23166e\") " pod="kube-system/kube-apiserver-ci-4081.3.3-n-d569167b40"
May 17 01:44:45.686469 kubelet[3068]: I0517 01:44:45.686453 3068 apiserver.go:52] "Watching apiserver"
May 17 01:44:45.688749 kubelet[3068]: I0517 01:44:45.688741 3068 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 17 01:44:45.698086 kubelet[3068]: I0517 01:44:45.698056 3068 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:45.698179 kubelet[3068]: I0517 01:44:45.698129 3068 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-n-d569167b40"
May 17 01:44:45.701926 kubelet[3068]: W0517 01:44:45.701874 3068 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:44:45.701995 kubelet[3068]: E0517 01:44:45.701934 3068 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-n-d569167b40\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-n-d569167b40"
May 17 01:44:45.702211 kubelet[3068]: W0517 01:44:45.702203 3068 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 01:44:45.702240 kubelet[3068]: E0517 01:44:45.702219 3068 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-n-d569167b40\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40"
May 17 01:44:45.709338 kubelet[3068]: I0517 01:44:45.709311 3068 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-n-d569167b40" podStartSLOduration=2.709294258 podStartE2EDuration="2.709294258s" podCreationTimestamp="2025-05-17 01:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:44:45.709266979 +0000 UTC m=+1.065720653" watchObservedRunningTime="2025-05-17 01:44:45.709294258 +0000 UTC m=+1.065747932"
May 17 01:44:45.717516 kubelet[3068]: I0517 01:44:45.717493 3068 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-n-d569167b40" podStartSLOduration=2.717484975 podStartE2EDuration="2.717484975s" podCreationTimestamp="2025-05-17 01:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:44:45.71361726 +0000 UTC m=+1.070070936" watchObservedRunningTime="2025-05-17 01:44:45.717484975 +0000 UTC m=+1.073938650"
May 17 01:44:45.721947 kubelet[3068]: I0517 01:44:45.721892 3068 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-n-d569167b40" podStartSLOduration=2.721885503 podStartE2EDuration="2.721885503s" podCreationTimestamp="2025-05-17 01:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:44:45.717575085 +0000 UTC m=+1.074028760" watchObservedRunningTime="2025-05-17 01:44:45.721885503 +0000 UTC m=+1.078339172"
May 17 01:44:50.011992 kubelet[3068]: I0517 01:44:50.011915 3068 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 17 01:44:50.013295 kubelet[3068]: I0517 01:44:50.013157 3068 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 17 01:44:50.013540 containerd[1819]: time="2025-05-17T01:44:50.012672362Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 17 01:44:50.709563 systemd[1]: Created slice kubepods-besteffort-pod3e931916_58d8_44ac_bbf2_fb2e174b4361.slice - libcontainer container kubepods-besteffort-pod3e931916_58d8_44ac_bbf2_fb2e174b4361.slice.
May 17 01:44:50.732455 kubelet[3068]: I0517 01:44:50.732403 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e931916-58d8-44ac-bbf2-fb2e174b4361-lib-modules\") pod \"kube-proxy-ct4mj\" (UID: \"3e931916-58d8-44ac-bbf2-fb2e174b4361\") " pod="kube-system/kube-proxy-ct4mj"
May 17 01:44:50.732665 kubelet[3068]: I0517 01:44:50.732469 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgs6t\" (UniqueName: \"kubernetes.io/projected/3e931916-58d8-44ac-bbf2-fb2e174b4361-kube-api-access-vgs6t\") pod \"kube-proxy-ct4mj\" (UID: \"3e931916-58d8-44ac-bbf2-fb2e174b4361\") " pod="kube-system/kube-proxy-ct4mj"
May 17 01:44:50.732665 kubelet[3068]: I0517 01:44:50.732515 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e931916-58d8-44ac-bbf2-fb2e174b4361-kube-proxy\") pod \"kube-proxy-ct4mj\" (UID: \"3e931916-58d8-44ac-bbf2-fb2e174b4361\") " pod="kube-system/kube-proxy-ct4mj"
May 17 01:44:50.732665 kubelet[3068]: I0517 01:44:50.732548 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e931916-58d8-44ac-bbf2-fb2e174b4361-xtables-lock\") pod \"kube-proxy-ct4mj\" (UID: \"3e931916-58d8-44ac-bbf2-fb2e174b4361\") " pod="kube-system/kube-proxy-ct4mj"
May 17 01:44:50.845148 kubelet[3068]: E0517 01:44:50.845039 3068 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 17 01:44:50.845148 kubelet[3068]: E0517 01:44:50.845102 3068 projected.go:194] Error preparing data for projected volume kube-api-access-vgs6t for pod kube-system/kube-proxy-ct4mj: configmap "kube-root-ca.crt" not found
May 17 01:44:50.845586 kubelet[3068]: E0517 01:44:50.845242 3068 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3e931916-58d8-44ac-bbf2-fb2e174b4361-kube-api-access-vgs6t podName:3e931916-58d8-44ac-bbf2-fb2e174b4361 nodeName:}" failed. No retries permitted until 2025-05-17 01:44:51.34519305 +0000 UTC m=+6.701646791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vgs6t" (UniqueName: "kubernetes.io/projected/3e931916-58d8-44ac-bbf2-fb2e174b4361-kube-api-access-vgs6t") pod "kube-proxy-ct4mj" (UID: "3e931916-58d8-44ac-bbf2-fb2e174b4361") : configmap "kube-root-ca.crt" not found
May 17 01:44:51.083100 systemd[1]: Created slice kubepods-besteffort-podc291e51b_e797_4279_a151_82b1214e48d6.slice - libcontainer container kubepods-besteffort-podc291e51b_e797_4279_a151_82b1214e48d6.slice.
May 17 01:44:51.136167 kubelet[3068]: I0517 01:44:51.136087 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c291e51b-e797-4279-a151-82b1214e48d6-var-lib-calico\") pod \"tigera-operator-844669ff44-2xnzm\" (UID: \"c291e51b-e797-4279-a151-82b1214e48d6\") " pod="tigera-operator/tigera-operator-844669ff44-2xnzm"
May 17 01:44:51.137043 kubelet[3068]: I0517 01:44:51.136184 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw2wv\" (UniqueName: \"kubernetes.io/projected/c291e51b-e797-4279-a151-82b1214e48d6-kube-api-access-lw2wv\") pod \"tigera-operator-844669ff44-2xnzm\" (UID: \"c291e51b-e797-4279-a151-82b1214e48d6\") " pod="tigera-operator/tigera-operator-844669ff44-2xnzm"
May 17 01:44:51.390035 containerd[1819]: time="2025-05-17T01:44:51.389795572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-2xnzm,Uid:c291e51b-e797-4279-a151-82b1214e48d6,Namespace:tigera-operator,Attempt:0,}"
May 17 01:44:51.629665 containerd[1819]: time="2025-05-17T01:44:51.629598285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ct4mj,Uid:3e931916-58d8-44ac-bbf2-fb2e174b4361,Namespace:kube-system,Attempt:0,}"
May 17 01:44:51.811096 containerd[1819]: time="2025-05-17T01:44:51.810965889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:44:51.811096 containerd[1819]: time="2025-05-17T01:44:51.811022167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:44:51.811096 containerd[1819]: time="2025-05-17T01:44:51.811037522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:44:51.811293 containerd[1819]: time="2025-05-17T01:44:51.811118655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:44:51.837742 systemd[1]: Started cri-containerd-89b287a09f41293050392a1330b5bb784e3fcf50866933c2e1175b2ecfde3ed4.scope - libcontainer container 89b287a09f41293050392a1330b5bb784e3fcf50866933c2e1175b2ecfde3ed4.
May 17 01:44:51.894092 containerd[1819]: time="2025-05-17T01:44:51.894040113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-2xnzm,Uid:c291e51b-e797-4279-a151-82b1214e48d6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"89b287a09f41293050392a1330b5bb784e3fcf50866933c2e1175b2ecfde3ed4\""
May 17 01:44:51.894876 containerd[1819]: time="2025-05-17T01:44:51.894858355Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 17 01:44:52.060715 containerd[1819]: time="2025-05-17T01:44:52.059686262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:44:52.060715 containerd[1819]: time="2025-05-17T01:44:52.060644317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:44:52.060715 containerd[1819]: time="2025-05-17T01:44:52.060681524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:44:52.061092 containerd[1819]: time="2025-05-17T01:44:52.060867005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:44:52.097851 systemd[1]: Started cri-containerd-fa0af394637fa4020f3ddec2efeee24656ffe17ea0bfe5691816acf00c5d03ff.scope - libcontainer container fa0af394637fa4020f3ddec2efeee24656ffe17ea0bfe5691816acf00c5d03ff.
May 17 01:44:52.145890 containerd[1819]: time="2025-05-17T01:44:52.145801990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ct4mj,Uid:3e931916-58d8-44ac-bbf2-fb2e174b4361,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa0af394637fa4020f3ddec2efeee24656ffe17ea0bfe5691816acf00c5d03ff\""
May 17 01:44:52.151579 containerd[1819]: time="2025-05-17T01:44:52.151446645Z" level=info msg="CreateContainer within sandbox \"fa0af394637fa4020f3ddec2efeee24656ffe17ea0bfe5691816acf00c5d03ff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 01:44:52.158233 containerd[1819]: time="2025-05-17T01:44:52.158190306Z" level=info msg="CreateContainer within sandbox \"fa0af394637fa4020f3ddec2efeee24656ffe17ea0bfe5691816acf00c5d03ff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a16df0155e74f7d50d20d3da900874e6f37cbf4554e3b07878ef6942d8e64c1e\""
May 17 01:44:52.158485 containerd[1819]: time="2025-05-17T01:44:52.158469002Z" level=info msg="StartContainer for \"a16df0155e74f7d50d20d3da900874e6f37cbf4554e3b07878ef6942d8e64c1e\""
May 17 01:44:52.189574 systemd[1]: Started cri-containerd-a16df0155e74f7d50d20d3da900874e6f37cbf4554e3b07878ef6942d8e64c1e.scope - libcontainer container a16df0155e74f7d50d20d3da900874e6f37cbf4554e3b07878ef6942d8e64c1e.
May 17 01:44:52.206746 containerd[1819]: time="2025-05-17T01:44:52.206721297Z" level=info msg="StartContainer for \"a16df0155e74f7d50d20d3da900874e6f37cbf4554e3b07878ef6942d8e64c1e\" returns successfully"
May 17 01:44:52.734812 kubelet[3068]: I0517 01:44:52.734650 3068 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ct4mj" podStartSLOduration=2.734597735 podStartE2EDuration="2.734597735s" podCreationTimestamp="2025-05-17 01:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:44:52.734391704 +0000 UTC m=+8.090845487" watchObservedRunningTime="2025-05-17 01:44:52.734597735 +0000 UTC m=+8.091051454"
May 17 01:44:53.344891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517362208.mount: Deactivated successfully.
May 17 01:44:53.594822 containerd[1819]: time="2025-05-17T01:44:53.594764429Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:53.595049 containerd[1819]: time="2025-05-17T01:44:53.594967226Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451"
May 17 01:44:53.595347 containerd[1819]: time="2025-05-17T01:44:53.595303027Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:53.596984 containerd[1819]: time="2025-05-17T01:44:53.596943275Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:44:53.597282 containerd[1819]: time="2025-05-17T01:44:53.597240826Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 1.702365359s"
May 17 01:44:53.597282 containerd[1819]: time="2025-05-17T01:44:53.597258342Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\""
May 17 01:44:53.598193 containerd[1819]: time="2025-05-17T01:44:53.598180929Z" level=info msg="CreateContainer within sandbox \"89b287a09f41293050392a1330b5bb784e3fcf50866933c2e1175b2ecfde3ed4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 17 01:44:53.602050 containerd[1819]: time="2025-05-17T01:44:53.602007399Z" level=info msg="CreateContainer within sandbox \"89b287a09f41293050392a1330b5bb784e3fcf50866933c2e1175b2ecfde3ed4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"09d3022b0dbb1d3edb5c7f16febf44c5eaf7eabc900dac85d19703b861700a83\""
May 17 01:44:53.602248 containerd[1819]: time="2025-05-17T01:44:53.602236290Z" level=info msg="StartContainer for \"09d3022b0dbb1d3edb5c7f16febf44c5eaf7eabc900dac85d19703b861700a83\""
May 17 01:44:53.628466 systemd[1]: Started cri-containerd-09d3022b0dbb1d3edb5c7f16febf44c5eaf7eabc900dac85d19703b861700a83.scope - libcontainer container 09d3022b0dbb1d3edb5c7f16febf44c5eaf7eabc900dac85d19703b861700a83.
May 17 01:44:53.639590 containerd[1819]: time="2025-05-17T01:44:53.639530686Z" level=info msg="StartContainer for \"09d3022b0dbb1d3edb5c7f16febf44c5eaf7eabc900dac85d19703b861700a83\" returns successfully"
May 17 01:44:53.739341 kubelet[3068]: I0517 01:44:53.739183 3068 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-2xnzm" podStartSLOduration=1.03617263 podStartE2EDuration="2.739141811s" podCreationTimestamp="2025-05-17 01:44:51 +0000 UTC" firstStartedPulling="2025-05-17 01:44:51.894675735 +0000 UTC m=+7.251129404" lastFinishedPulling="2025-05-17 01:44:53.597644915 +0000 UTC m=+8.954098585" observedRunningTime="2025-05-17 01:44:53.738875483 +0000 UTC m=+9.095329246" watchObservedRunningTime="2025-05-17 01:44:53.739141811 +0000 UTC m=+9.095595537"
May 17 01:44:57.892432 sudo[2089]: pam_unix(sudo:session): session closed for user root
May 17 01:44:57.893398 sshd[2086]: pam_unix(sshd:session): session closed for user core
May 17 01:44:57.895937 systemd[1]: sshd@8-145.40.90.165:22-147.75.109.163:53882.service: Deactivated successfully.
May 17 01:44:57.897162 systemd[1]: session-11.scope: Deactivated successfully.
May 17 01:44:57.897473 systemd[1]: session-11.scope: Consumed 3.390s CPU time, 161.2M memory peak, 0B memory swap peak.
May 17 01:44:57.897779 systemd-logind[1800]: Session 11 logged out. Waiting for processes to exit.
May 17 01:44:57.898284 systemd-logind[1800]: Removed session 11.
May 17 01:44:58.180416 update_engine[1805]: I20250517 01:44:58.180332 1805 update_attempter.cc:509] Updating boot flags...
May 17 01:44:58.208284 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3600)
May 17 01:44:58.237287 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3596)
May 17 01:44:58.759526 systemd[1]: Started sshd@9-145.40.90.165:22-218.92.0.157:59659.service - OpenSSH per-connection server daemon (218.92.0.157:59659).
May 17 01:44:59.864871 sshd[3612]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
May 17 01:44:59.943280 systemd[1]: Created slice kubepods-besteffort-pod14ba1d3d_eed0_4587_a040_0d57064c2b7c.slice - libcontainer container kubepods-besteffort-pod14ba1d3d_eed0_4587_a040_0d57064c2b7c.slice.
May 17 01:45:00.004543 kubelet[3068]: I0517 01:45:00.004475 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14ba1d3d-eed0-4587-a040-0d57064c2b7c-tigera-ca-bundle\") pod \"calico-typha-dbfc59c-95cz5\" (UID: \"14ba1d3d-eed0-4587-a040-0d57064c2b7c\") " pod="calico-system/calico-typha-dbfc59c-95cz5"
May 17 01:45:00.005600 kubelet[3068]: I0517 01:45:00.004588 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/14ba1d3d-eed0-4587-a040-0d57064c2b7c-typha-certs\") pod \"calico-typha-dbfc59c-95cz5\" (UID: \"14ba1d3d-eed0-4587-a040-0d57064c2b7c\") " pod="calico-system/calico-typha-dbfc59c-95cz5"
May 17 01:45:00.005600 kubelet[3068]: I0517 01:45:00.004685 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fmw8\" (UniqueName: \"kubernetes.io/projected/14ba1d3d-eed0-4587-a040-0d57064c2b7c-kube-api-access-7fmw8\") pod \"calico-typha-dbfc59c-95cz5\" (UID: \"14ba1d3d-eed0-4587-a040-0d57064c2b7c\") " pod="calico-system/calico-typha-dbfc59c-95cz5"
May 17 01:45:00.246860 containerd[1819]: time="2025-05-17T01:45:00.246615420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dbfc59c-95cz5,Uid:14ba1d3d-eed0-4587-a040-0d57064c2b7c,Namespace:calico-system,Attempt:0,}"
May 17 01:45:00.257301 containerd[1819]: time="2025-05-17T01:45:00.257251256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:45:00.257375 containerd[1819]: time="2025-05-17T01:45:00.257298929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:45:00.257375 containerd[1819]: time="2025-05-17T01:45:00.257310611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:45:00.257375 containerd[1819]: time="2025-05-17T01:45:00.257356603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:45:00.284788 systemd[1]: Started cri-containerd-8ac35e647b0ca915781d678f8878638a792298036c8abf8e02489b74ad1bf397.scope - libcontainer container 8ac35e647b0ca915781d678f8878638a792298036c8abf8e02489b74ad1bf397.
May 17 01:45:00.355890 containerd[1819]: time="2025-05-17T01:45:00.355863245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dbfc59c-95cz5,Uid:14ba1d3d-eed0-4587-a040-0d57064c2b7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ac35e647b0ca915781d678f8878638a792298036c8abf8e02489b74ad1bf397\""
May 17 01:45:00.356706 containerd[1819]: time="2025-05-17T01:45:00.356689380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\""
May 17 01:45:00.440846 systemd[1]: Created slice kubepods-besteffort-podfc92564f_ae80_4be5_aa41_071d67f21fcf.slice - libcontainer container kubepods-besteffort-podfc92564f_ae80_4be5_aa41_071d67f21fcf.slice.
May 17 01:45:00.508888 kubelet[3068]: I0517 01:45:00.508645 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc92564f-ae80-4be5-aa41-071d67f21fcf-lib-modules\") pod \"calico-node-87d8h\" (UID: \"fc92564f-ae80-4be5-aa41-071d67f21fcf\") " pod="calico-system/calico-node-87d8h"
May 17 01:45:00.508888 kubelet[3068]: I0517 01:45:00.508750 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fc92564f-ae80-4be5-aa41-071d67f21fcf-cni-net-dir\") pod \"calico-node-87d8h\" (UID: \"fc92564f-ae80-4be5-aa41-071d67f21fcf\") " pod="calico-system/calico-node-87d8h"
May 17 01:45:00.508888 kubelet[3068]: I0517 01:45:00.508805 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fc92564f-ae80-4be5-aa41-071d67f21fcf-var-run-calico\") pod \"calico-node-87d8h\" (UID: \"fc92564f-ae80-4be5-aa41-071d67f21fcf\") " pod="calico-system/calico-node-87d8h"
May 17 01:45:00.508888 kubelet[3068]: I0517 01:45:00.508852 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l89rx\" (UniqueName: \"kubernetes.io/projected/fc92564f-ae80-4be5-aa41-071d67f21fcf-kube-api-access-l89rx\") pod \"calico-node-87d8h\" (UID: \"fc92564f-ae80-4be5-aa41-071d67f21fcf\") " pod="calico-system/calico-node-87d8h"
May 17 01:45:00.509580 kubelet[3068]: I0517 01:45:00.508911 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc92564f-ae80-4be5-aa41-071d67f21fcf-var-lib-calico\") pod \"calico-node-87d8h\" (UID: \"fc92564f-ae80-4be5-aa41-071d67f21fcf\") " pod="calico-system/calico-node-87d8h"
May 17 01:45:00.509580 kubelet[3068]: I0517 01:45:00.508957 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fc92564f-ae80-4be5-aa41-071d67f21fcf-node-certs\") pod \"calico-node-87d8h\" (UID: \"fc92564f-ae80-4be5-aa41-071d67f21fcf\") " pod="calico-system/calico-node-87d8h"
May 17 01:45:00.509580 kubelet[3068]: I0517 01:45:00.509000 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fc92564f-ae80-4be5-aa41-071d67f21fcf-cni-bin-dir\") pod \"calico-node-87d8h\" (UID: \"fc92564f-ae80-4be5-aa41-071d67f21fcf\") " pod="calico-system/calico-node-87d8h"
May 17 01:45:00.509580 kubelet[3068]: I0517 01:45:00.509043 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fc92564f-ae80-4be5-aa41-071d67f21fcf-cni-log-dir\") pod \"calico-node-87d8h\" (UID: \"fc92564f-ae80-4be5-aa41-071d67f21fcf\") " pod="calico-system/calico-node-87d8h"
May 17 01:45:00.509580 kubelet[3068]: I0517 01:45:00.509084 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc92564f-ae80-4be5-aa41-071d67f21fcf-tigera-ca-bundle\") pod \"calico-node-87d8h\" (UID: \"fc92564f-ae80-4be5-aa41-071d67f21fcf\") " pod="calico-system/calico-node-87d8h"
May 17 01:45:00.509997 kubelet[3068]: I0517 01:45:00.509130 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fc92564f-ae80-4be5-aa41-071d67f21fcf-policysync\") pod \"calico-node-87d8h\" (UID: \"fc92564f-ae80-4be5-aa41-071d67f21fcf\") " pod="calico-system/calico-node-87d8h"
May 17 01:45:00.509997 kubelet[3068]: I0517 01:45:00.509175 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc92564f-ae80-4be5-aa41-071d67f21fcf-xtables-lock\") pod \"calico-node-87d8h\" (UID: \"fc92564f-ae80-4be5-aa41-071d67f21fcf\") " pod="calico-system/calico-node-87d8h"
May 17 01:45:00.509997 kubelet[3068]: I0517 01:45:00.509347 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fc92564f-ae80-4be5-aa41-071d67f21fcf-flexvol-driver-host\") pod \"calico-node-87d8h\" (UID: \"fc92564f-ae80-4be5-aa41-071d67f21fcf\") " pod="calico-system/calico-node-87d8h"
May 17 01:45:00.611582 kubelet[3068]: E0517 01:45:00.611514 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 01:45:00.611582 kubelet[3068]: W0517 01:45:00.611563 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 01:45:00.612044 kubelet[3068]: E0517 01:45:00.611662 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The three-message FlexVolume probe failure above repeats verbatim, with advancing timestamps, at 01:45:00.612, 01:45:00.613, 01:45:00.617 and 01:45:00.629; duplicates omitted.]
May 17 01:45:00.683450 kubelet[3068]: E0517 01:45:00.683420 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4dvh" podUID="64a6b624-b75c-46f6-8f62-c89636ac29be"
[Further verbatim repeats of the FlexVolume probe failure from 01:45:00.688 through roughly 01:45:00.692 omitted.]
[Between 01:45:00.711 and roughly 01:45:00.716 the same FlexVolume probe failure repeats again, interleaved with the volume-attach records below; only the unique records are kept.]
May 17 01:45:00.711480 kubelet[3068]: I0517 01:45:00.711344 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/64a6b624-b75c-46f6-8f62-c89636ac29be-registration-dir\") pod \"csi-node-driver-k4dvh\" (UID: \"64a6b624-b75c-46f6-8f62-c89636ac29be\") " pod="calico-system/csi-node-driver-k4dvh"
May 17 01:45:00.711795 kubelet[3068]: I0517 01:45:00.711666 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkg7s\" (UniqueName: \"kubernetes.io/projected/64a6b624-b75c-46f6-8f62-c89636ac29be-kube-api-access-hkg7s\") pod \"csi-node-driver-k4dvh\" (UID: \"64a6b624-b75c-46f6-8f62-c89636ac29be\") " pod="calico-system/csi-node-driver-k4dvh"
May 17 01:45:00.712740 kubelet[3068]: I0517 01:45:00.712652 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64a6b624-b75c-46f6-8f62-c89636ac29be-kubelet-dir\") pod \"csi-node-driver-k4dvh\" (UID: \"64a6b624-b75c-46f6-8f62-c89636ac29be\") " pod="calico-system/csi-node-driver-k4dvh"
May 17 01:45:00.713511 kubelet[3068]: I0517 01:45:00.713497 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/64a6b624-b75c-46f6-8f62-c89636ac29be-socket-dir\") pod \"csi-node-driver-k4dvh\" (UID: \"64a6b624-b75c-46f6-8f62-c89636ac29be\") " pod="calico-system/csi-node-driver-k4dvh"
May 17 01:45:00.713922 kubelet[3068]: I0517 01:45:00.713838 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/64a6b624-b75c-46f6-8f62-c89636ac29be-varrun\") pod \"csi-node-driver-k4dvh\" (UID: \"64a6b624-b75c-46f6-8f62-c89636ac29be\") " pod="calico-system/calico-system/csi-node-driver-k4dvh"
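[Editor's note] The recurring triplet above comes from the kubelet's FlexVolume prober: on each scan of the plugin directory it execs the driver registered under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ with the argument [init], and the driver binary (uds) does not exist yet. The exec fails with Go's exec.ErrNotFound (the W line, which the kubelet logs even though it invoked the driver by full path), the driver output is therefore empty, and unmarshalling "" as JSON produces the "unexpected end of JSON input" of the E lines. Calico normally installs this driver via its pod2daemon-flexvol init container (note the flexvol-driver-host host-path volume above and the pod2daemon-flexvol:v3.30.0 image pull later in this log), so the storm should subside once calico-node-87d8h is running. A minimal Go sketch (illustrative only, not kubelet source) that reproduces both error strings:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // W line: looking up a driver binary that is not installed yields
        // exec.ErrNotFound, "executable file not found in $PATH".
        if _, err := exec.LookPath("uds"); err != nil {
            fmt.Println(err) // exec: "uds": executable file not found in $PATH
        }

        // E line: with no driver output at all, unmarshalling the empty
        // string as the expected JSON status fails.
        var status map[string]interface{}
        if err := json.Unmarshal([]byte(""), &status); err != nil {
            fmt.Println(err) // unexpected end of JSON input
        }
    }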
May 17 01:45:00.744472 containerd[1819]: time="2025-05-17T01:45:00.744399935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-87d8h,Uid:fc92564f-ae80-4be5-aa41-071d67f21fcf,Namespace:calico-system,Attempt:0,}"
May 17 01:45:00.779674 containerd[1819]: time="2025-05-17T01:45:00.779583614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 01:45:00.779674 containerd[1819]: time="2025-05-17T01:45:00.779623380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 01:45:00.779674 containerd[1819]: time="2025-05-17T01:45:00.779633157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:45:00.779792 containerd[1819]: time="2025-05-17T01:45:00.779714845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 01:45:00.805521 systemd[1]: Started cri-containerd-beabbf3e7e9527ebcab2909f09af32e17e0a99e15fcee5717e463cd9118b3bff.scope - libcontainer container beabbf3e7e9527ebcab2909f09af32e17e0a99e15fcee5717e463cd9118b3bff.
[The FlexVolume probe failure repeats verbatim from 01:45:00.814 through 01:45:00.819; duplicates omitted.]
May 17 01:45:00.820261 containerd[1819]: time="2025-05-17T01:45:00.820228963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-87d8h,Uid:fc92564f-ae80-4be5-aa41-071d67f21fcf,Namespace:calico-system,Attempt:0,} returns sandbox id \"beabbf3e7e9527ebcab2909f09af32e17e0a99e15fcee5717e463cd9118b3bff\""
[One further verbatim repeat of the FlexVolume probe failure at 01:45:00.826 omitted.]
May 17 01:45:02.021753 sshd[3610]: PAM: Permission denied for root from 218.92.0.157
May 17 01:45:02.051379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount188182126.mount: Deactivated successfully.
May 17 01:45:02.312940 sshd[3781]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
May 17 01:45:02.694959 kubelet[3068]: E0517 01:45:02.694935 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4dvh" podUID="64a6b624-b75c-46f6-8f62-c89636ac29be"
May 17 01:45:02.854463 containerd[1819]: time="2025-05-17T01:45:02.854411033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:45:02.854656 containerd[1819]: time="2025-05-17T01:45:02.854620344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669"
May 17 01:45:02.855493 containerd[1819]: time="2025-05-17T01:45:02.855165503Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:45:02.857121 containerd[1819]: time="2025-05-17T01:45:02.857077697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 01:45:02.857426 containerd[1819]: time="2025-05-17T01:45:02.857378266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.500666384s"
May 17 01:45:02.857426 containerd[1819]: time="2025-05-17T01:45:02.857398512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\""
May 17 01:45:02.857894 containerd[1819]: time="2025-05-17T01:45:02.857880837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\""
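[Editor's note] The "Pulled image" record above contains enough to estimate registry throughput: 35158523 bytes in 2.500666384s is about 14.1 MB/s (the nearby "bytes read=35158669" is slightly larger, presumably because it also counts metadata fetches). A trivial Go check of the arithmetic:

    package main

    import "fmt"

    func main() {
        // Figures taken from the "Pulled image ... typha:v3.30.0" record above.
        const imageBytes = 35158523.0   // reported image size
        const pullSeconds = 2.500666384 // reported pull duration
        fmt.Printf("~%.1f MB/s\n", imageBytes/pullSeconds/1e6) // ~14.1 MB/s
    }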
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 01:45:02.860839 containerd[1819]: time="2025-05-17T01:45:02.860737892Z" level=info msg="CreateContainer within sandbox \"8ac35e647b0ca915781d678f8878638a792298036c8abf8e02489b74ad1bf397\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 01:45:02.864924 containerd[1819]: time="2025-05-17T01:45:02.864909777Z" level=info msg="CreateContainer within sandbox \"8ac35e647b0ca915781d678f8878638a792298036c8abf8e02489b74ad1bf397\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5cf5e7af5d451c708c1af83f230e8aafe136391fe46e2ef3714ec8e51a71b7ab\"" May 17 01:45:02.865136 containerd[1819]: time="2025-05-17T01:45:02.865126468Z" level=info msg="StartContainer for \"5cf5e7af5d451c708c1af83f230e8aafe136391fe46e2ef3714ec8e51a71b7ab\"" May 17 01:45:02.893540 systemd[1]: Started cri-containerd-5cf5e7af5d451c708c1af83f230e8aafe136391fe46e2ef3714ec8e51a71b7ab.scope - libcontainer container 5cf5e7af5d451c708c1af83f230e8aafe136391fe46e2ef3714ec8e51a71b7ab. May 17 01:45:02.924130 containerd[1819]: time="2025-05-17T01:45:02.924102823Z" level=info msg="StartContainer for \"5cf5e7af5d451c708c1af83f230e8aafe136391fe46e2ef3714ec8e51a71b7ab\" returns successfully" May 17 01:45:03.769172 kubelet[3068]: I0517 01:45:03.769034 3068 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-dbfc59c-95cz5" podStartSLOduration=2.267704362 podStartE2EDuration="4.768981934s" podCreationTimestamp="2025-05-17 01:44:59 +0000 UTC" firstStartedPulling="2025-05-17 01:45:00.35652109 +0000 UTC m=+15.712974764" lastFinishedPulling="2025-05-17 01:45:02.857798667 +0000 UTC m=+18.214252336" observedRunningTime="2025-05-17 01:45:03.768731043 +0000 UTC m=+19.125184823" watchObservedRunningTime="2025-05-17 01:45:03.768981934 +0000 UTC m=+19.125435706" May 17 01:45:03.813882 kubelet[3068]: E0517 01:45:03.813783 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.813882 kubelet[3068]: W0517 01:45:03.813836 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.813882 kubelet[3068]: E0517 01:45:03.813881 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.814609 kubelet[3068]: E0517 01:45:03.814511 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.814609 kubelet[3068]: W0517 01:45:03.814549 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.814609 kubelet[3068]: E0517 01:45:03.814584 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:45:03.815246 kubelet[3068]: E0517 01:45:03.815187 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.815246 kubelet[3068]: W0517 01:45:03.815225 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.815525 kubelet[3068]: E0517 01:45:03.815259 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.816040 kubelet[3068]: E0517 01:45:03.815957 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.816040 kubelet[3068]: W0517 01:45:03.815992 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.816040 kubelet[3068]: E0517 01:45:03.816023 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.816691 kubelet[3068]: E0517 01:45:03.816608 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.816691 kubelet[3068]: W0517 01:45:03.816637 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.816691 kubelet[3068]: E0517 01:45:03.816665 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.817220 kubelet[3068]: E0517 01:45:03.817181 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.817220 kubelet[3068]: W0517 01:45:03.817210 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.817513 kubelet[3068]: E0517 01:45:03.817238 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.817889 kubelet[3068]: E0517 01:45:03.817799 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.817889 kubelet[3068]: W0517 01:45:03.817840 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.817889 kubelet[3068]: E0517 01:45:03.817875 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:45:03.818540 kubelet[3068]: E0517 01:45:03.818482 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.818540 kubelet[3068]: W0517 01:45:03.818522 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.818997 kubelet[3068]: E0517 01:45:03.818557 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.819221 kubelet[3068]: E0517 01:45:03.819176 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.819221 kubelet[3068]: W0517 01:45:03.819220 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.819652 kubelet[3068]: E0517 01:45:03.819251 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.819832 kubelet[3068]: E0517 01:45:03.819788 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.820051 kubelet[3068]: W0517 01:45:03.819839 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.820051 kubelet[3068]: E0517 01:45:03.819885 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.820659 kubelet[3068]: E0517 01:45:03.820599 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.820659 kubelet[3068]: W0517 01:45:03.820637 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.820659 kubelet[3068]: E0517 01:45:03.820672 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.821330 kubelet[3068]: E0517 01:45:03.821214 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.821330 kubelet[3068]: W0517 01:45:03.821241 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.821330 kubelet[3068]: E0517 01:45:03.821270 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:45:03.821856 kubelet[3068]: E0517 01:45:03.821812 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.821856 kubelet[3068]: W0517 01:45:03.821851 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.822087 kubelet[3068]: E0517 01:45:03.821886 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.822569 kubelet[3068]: E0517 01:45:03.822486 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.822569 kubelet[3068]: W0517 01:45:03.822525 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.822569 kubelet[3068]: E0517 01:45:03.822559 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.823206 kubelet[3068]: E0517 01:45:03.823129 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.823206 kubelet[3068]: W0517 01:45:03.823161 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.823206 kubelet[3068]: E0517 01:45:03.823199 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.841859 kubelet[3068]: E0517 01:45:03.841773 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.841859 kubelet[3068]: W0517 01:45:03.841810 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.841859 kubelet[3068]: E0517 01:45:03.841841 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.842522 kubelet[3068]: E0517 01:45:03.842429 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.842522 kubelet[3068]: W0517 01:45:03.842477 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.842522 kubelet[3068]: E0517 01:45:03.842515 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:45:03.843251 kubelet[3068]: E0517 01:45:03.843166 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.843251 kubelet[3068]: W0517 01:45:03.843204 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.843562 kubelet[3068]: E0517 01:45:03.843259 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.843904 kubelet[3068]: E0517 01:45:03.843823 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.843904 kubelet[3068]: W0517 01:45:03.843864 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.843904 kubelet[3068]: E0517 01:45:03.843905 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.844509 kubelet[3068]: E0517 01:45:03.844431 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.844509 kubelet[3068]: W0517 01:45:03.844461 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.844815 kubelet[3068]: E0517 01:45:03.844546 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.844933 kubelet[3068]: E0517 01:45:03.844915 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.845037 kubelet[3068]: W0517 01:45:03.844941 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.845134 kubelet[3068]: E0517 01:45:03.845049 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.845507 kubelet[3068]: E0517 01:45:03.845415 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.845507 kubelet[3068]: W0517 01:45:03.845443 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.845769 kubelet[3068]: E0517 01:45:03.845551 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:45:03.846076 kubelet[3068]: E0517 01:45:03.845978 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.846076 kubelet[3068]: W0517 01:45:03.846017 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.846076 kubelet[3068]: E0517 01:45:03.846060 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.846887 kubelet[3068]: E0517 01:45:03.846788 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.846887 kubelet[3068]: W0517 01:45:03.846826 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.847173 kubelet[3068]: E0517 01:45:03.846963 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.847474 kubelet[3068]: E0517 01:45:03.847402 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.847474 kubelet[3068]: W0517 01:45:03.847432 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.847737 kubelet[3068]: E0517 01:45:03.847549 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.848073 kubelet[3068]: E0517 01:45:03.847995 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.848073 kubelet[3068]: W0517 01:45:03.848033 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.848352 kubelet[3068]: E0517 01:45:03.848088 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.848720 kubelet[3068]: E0517 01:45:03.848688 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.848839 kubelet[3068]: W0517 01:45:03.848720 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.848839 kubelet[3068]: E0517 01:45:03.848758 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:45:03.849438 kubelet[3068]: E0517 01:45:03.849362 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.849438 kubelet[3068]: W0517 01:45:03.849392 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.849735 kubelet[3068]: E0517 01:45:03.849517 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.850047 kubelet[3068]: E0517 01:45:03.849965 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.850047 kubelet[3068]: W0517 01:45:03.850003 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.850349 kubelet[3068]: E0517 01:45:03.850095 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.850699 kubelet[3068]: E0517 01:45:03.850603 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.850699 kubelet[3068]: W0517 01:45:03.850641 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.850979 kubelet[3068]: E0517 01:45:03.850775 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.851256 kubelet[3068]: E0517 01:45:03.851220 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.851256 kubelet[3068]: W0517 01:45:03.851251 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.851503 kubelet[3068]: E0517 01:45:03.851315 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:03.852028 kubelet[3068]: E0517 01:45:03.851936 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.852028 kubelet[3068]: W0517 01:45:03.851973 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.852028 kubelet[3068]: E0517 01:45:03.852008 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:45:03.853135 kubelet[3068]: E0517 01:45:03.853040 3068 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:45:03.853135 kubelet[3068]: W0517 01:45:03.853078 3068 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:45:03.853135 kubelet[3068]: E0517 01:45:03.853113 3068 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:45:04.213914 sshd[3610]: PAM: Permission denied for root from 218.92.0.157 May 17 01:45:04.506158 sshd[3878]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root May 17 01:45:04.695900 kubelet[3068]: E0517 01:45:04.695842 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4dvh" podUID="64a6b624-b75c-46f6-8f62-c89636ac29be" May 17 01:45:04.748578 kubelet[3068]: I0517 01:45:04.748562 3068 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:45:04.766641 containerd[1819]: time="2025-05-17T01:45:04.766555883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:04.766827 containerd[1819]: time="2025-05-17T01:45:04.766714962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 01:45:04.767071 containerd[1819]: time="2025-05-17T01:45:04.767035729Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:04.768149 containerd[1819]: time="2025-05-17T01:45:04.768106802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:04.768538 containerd[1819]: time="2025-05-17T01:45:04.768497089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.910599619s" May 17 01:45:04.768538 containerd[1819]: time="2025-05-17T01:45:04.768513066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 01:45:04.769994 containerd[1819]: time="2025-05-17T01:45:04.769980664Z" level=info msg="CreateContainer within sandbox \"beabbf3e7e9527ebcab2909f09af32e17e0a99e15fcee5717e463cd9118b3bff\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 01:45:04.774468 containerd[1819]: time="2025-05-17T01:45:04.774423254Z" level=info 
msg="CreateContainer within sandbox \"beabbf3e7e9527ebcab2909f09af32e17e0a99e15fcee5717e463cd9118b3bff\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4650abfb8d66d7b5e80feb7dd5de54f38ba1cb4627d9f9b5d4822dc97976ff86\"" May 17 01:45:04.774642 containerd[1819]: time="2025-05-17T01:45:04.774626319Z" level=info msg="StartContainer for \"4650abfb8d66d7b5e80feb7dd5de54f38ba1cb4627d9f9b5d4822dc97976ff86\"" May 17 01:45:04.799583 systemd[1]: Started cri-containerd-4650abfb8d66d7b5e80feb7dd5de54f38ba1cb4627d9f9b5d4822dc97976ff86.scope - libcontainer container 4650abfb8d66d7b5e80feb7dd5de54f38ba1cb4627d9f9b5d4822dc97976ff86. May 17 01:45:04.812217 containerd[1819]: time="2025-05-17T01:45:04.812147608Z" level=info msg="StartContainer for \"4650abfb8d66d7b5e80feb7dd5de54f38ba1cb4627d9f9b5d4822dc97976ff86\" returns successfully" May 17 01:45:04.817101 systemd[1]: cri-containerd-4650abfb8d66d7b5e80feb7dd5de54f38ba1cb4627d9f9b5d4822dc97976ff86.scope: Deactivated successfully. May 17 01:45:04.830016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4650abfb8d66d7b5e80feb7dd5de54f38ba1cb4627d9f9b5d4822dc97976ff86-rootfs.mount: Deactivated successfully. May 17 01:45:05.259166 containerd[1819]: time="2025-05-17T01:45:05.259071858Z" level=info msg="shim disconnected" id=4650abfb8d66d7b5e80feb7dd5de54f38ba1cb4627d9f9b5d4822dc97976ff86 namespace=k8s.io May 17 01:45:05.259166 containerd[1819]: time="2025-05-17T01:45:05.259156836Z" level=warning msg="cleaning up after shim disconnected" id=4650abfb8d66d7b5e80feb7dd5de54f38ba1cb4627d9f9b5d4822dc97976ff86 namespace=k8s.io May 17 01:45:05.259166 containerd[1819]: time="2025-05-17T01:45:05.259162268Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 01:45:05.757203 containerd[1819]: time="2025-05-17T01:45:05.757115324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 01:45:06.696252 kubelet[3068]: E0517 01:45:06.696115 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4dvh" podUID="64a6b624-b75c-46f6-8f62-c89636ac29be" May 17 01:45:07.015412 sshd[3610]: PAM: Permission denied for root from 218.92.0.157 May 17 01:45:07.035574 systemd[1]: Started sshd@10-145.40.90.165:22-94.102.4.12:36708.service - OpenSSH per-connection server daemon (94.102.4.12:36708). May 17 01:45:07.160839 sshd[3610]: Received disconnect from 218.92.0.157 port 59659:11: [preauth] May 17 01:45:07.160839 sshd[3610]: Disconnected from authenticating user root 218.92.0.157 port 59659 [preauth] May 17 01:45:07.164407 systemd[1]: sshd@9-145.40.90.165:22-218.92.0.157:59659.service: Deactivated successfully. May 17 01:45:08.198205 sshd[3952]: Invalid user linda from 94.102.4.12 port 36708 May 17 01:45:08.413814 sshd[3952]: Received disconnect from 94.102.4.12 port 36708:11: Bye Bye [preauth] May 17 01:45:08.413814 sshd[3952]: Disconnected from invalid user linda 94.102.4.12 port 36708 [preauth] May 17 01:45:08.416440 systemd[1]: sshd@10-145.40.90.165:22-94.102.4.12:36708.service: Deactivated successfully. 
May 17 01:45:08.695431 kubelet[3068]: E0517 01:45:08.695402 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4dvh" podUID="64a6b624-b75c-46f6-8f62-c89636ac29be" May 17 01:45:09.028776 containerd[1819]: time="2025-05-17T01:45:09.028688275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:09.028958 containerd[1819]: time="2025-05-17T01:45:09.028828145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 01:45:09.029210 containerd[1819]: time="2025-05-17T01:45:09.029174493Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:09.030322 containerd[1819]: time="2025-05-17T01:45:09.030275537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:09.030762 containerd[1819]: time="2025-05-17T01:45:09.030719493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.273530486s" May 17 01:45:09.030762 containerd[1819]: time="2025-05-17T01:45:09.030736492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 01:45:09.031744 containerd[1819]: time="2025-05-17T01:45:09.031730593Z" level=info msg="CreateContainer within sandbox \"beabbf3e7e9527ebcab2909f09af32e17e0a99e15fcee5717e463cd9118b3bff\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 01:45:09.036317 containerd[1819]: time="2025-05-17T01:45:09.036294426Z" level=info msg="CreateContainer within sandbox \"beabbf3e7e9527ebcab2909f09af32e17e0a99e15fcee5717e463cd9118b3bff\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"671a92373fc493a098f982b98b5b7e4f757f255c24e66d1c10ef59d8fc5903a8\"" May 17 01:45:09.036450 containerd[1819]: time="2025-05-17T01:45:09.036438353Z" level=info msg="StartContainer for \"671a92373fc493a098f982b98b5b7e4f757f255c24e66d1c10ef59d8fc5903a8\"" May 17 01:45:09.059466 systemd[1]: Started cri-containerd-671a92373fc493a098f982b98b5b7e4f757f255c24e66d1c10ef59d8fc5903a8.scope - libcontainer container 671a92373fc493a098f982b98b5b7e4f757f255c24e66d1c10ef59d8fc5903a8. 
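The reported pull duration for the cni image is internally measured by containerd, and it lines up with the surrounding journal timestamps: PullImage was logged at 01:45:05.757115 and "Pulled image" at 01:45:09.030719. A quick check of that arithmetic (timestamps copied from the log above):

```python
# Sanity-check the reported image-pull duration against the journal timestamps.
from datetime import datetime

start = datetime.fromisoformat("2025-05-17T01:45:05.757115")  # "PullImage ..." logged
done = datetime.fromisoformat("2025-05-17T01:45:09.030719")   # "Pulled image ..." logged
print(done - start)  # 0:00:03.273604
```

That is within a tenth of a millisecond of the reported 3.273530486s; the internal timer stops slightly before the journal line is emitted.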
May 17 01:45:09.071923 containerd[1819]: time="2025-05-17T01:45:09.071871661Z" level=info msg="StartContainer for \"671a92373fc493a098f982b98b5b7e4f757f255c24e66d1c10ef59d8fc5903a8\" returns successfully" May 17 01:45:09.620116 containerd[1819]: time="2025-05-17T01:45:09.620077943Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 01:45:09.621075 systemd[1]: cri-containerd-671a92373fc493a098f982b98b5b7e4f757f255c24e66d1c10ef59d8fc5903a8.scope: Deactivated successfully. May 17 01:45:09.630642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-671a92373fc493a098f982b98b5b7e4f757f255c24e66d1c10ef59d8fc5903a8-rootfs.mount: Deactivated successfully. May 17 01:45:09.660211 kubelet[3068]: I0517 01:45:09.660114 3068 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 01:45:09.712923 systemd[1]: Created slice kubepods-burstable-pod3af411a3_09ba_4381_bce5_19753bf3d671.slice - libcontainer container kubepods-burstable-pod3af411a3_09ba_4381_bce5_19753bf3d671.slice. May 17 01:45:09.720149 systemd[1]: Created slice kubepods-burstable-pod22e0dd7b_458b_49cc_aec7_e6a2e03d9deb.slice - libcontainer container kubepods-burstable-pod22e0dd7b_458b_49cc_aec7_e6a2e03d9deb.slice. May 17 01:45:09.726980 systemd[1]: Created slice kubepods-besteffort-podb0cc1622_d561_4a8d_9dde_c3b01983d270.slice - libcontainer container kubepods-besteffort-podb0cc1622_d561_4a8d_9dde_c3b01983d270.slice. May 17 01:45:09.733062 systemd[1]: Created slice kubepods-besteffort-poda83cb314_94c7_48be_9000_43244ee2be0f.slice - libcontainer container kubepods-besteffort-poda83cb314_94c7_48be_9000_43244ee2be0f.slice. May 17 01:45:09.739257 systemd[1]: Created slice kubepods-besteffort-pod1fc7e728_2787_4a01_a1fd_dfaad847d529.slice - libcontainer container kubepods-besteffort-pod1fc7e728_2787_4a01_a1fd_dfaad847d529.slice. May 17 01:45:09.742772 systemd[1]: Created slice kubepods-besteffort-pod4e9e9b39_38e8_4c49_8ba5_42ffe70ae6b8.slice - libcontainer container kubepods-besteffort-pod4e9e9b39_38e8_4c49_8ba5_42ffe70ae6b8.slice. May 17 01:45:09.746425 systemd[1]: Created slice kubepods-besteffort-pod0bdfd9d7_2db3_41ee_a583_e9b26216debf.slice - libcontainer container kubepods-besteffort-pod0bdfd9d7_2db3_41ee_a583_e9b26216debf.slice. 
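The "failed to reload cni configuration" error above fires because containerd's file watcher saw /etc/cni/net.d/calico-kubeconfig being written while no *.conflist network config existed yet, so the reload still found nothing to load. For orientation, a sketch of the general shape of the config list that Calico's install-cni container later writes there — the concrete field values below are illustrative assumptions, not values read from this host:

```python
# Illustrative shape of a CNI network config list like the one install-cni
# eventually writes to /etc/cni/net.d. Field values are assumptions for
# illustration only.
import json

conflist = {
    "name": "k8s-pod-network",
    "cniVersion": "0.3.1",
    "plugins": [
        {
            "type": "calico",  # matches the plugin type="calico" in the errors below
            "ipam": {"type": "calico-ipam"},
            "policy": {"type": "k8s"},
            "kubernetes": {"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}
print(json.dumps(conflist, indent=2))
```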
May 17 01:45:09.786339 kubelet[3068]: I0517 01:45:09.786315 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8-config\") pod \"goldmane-78d55f7ddc-zwf9c\" (UID: \"4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8\") " pod="calico-system/goldmane-78d55f7ddc-zwf9c" May 17 01:45:09.786339 kubelet[3068]: I0517 01:45:09.786344 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-zwf9c\" (UID: \"4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8\") " pod="calico-system/goldmane-78d55f7ddc-zwf9c" May 17 01:45:09.796698 kubelet[3068]: I0517 01:45:09.786359 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22e0dd7b-458b-49cc-aec7-e6a2e03d9deb-config-volume\") pod \"coredns-668d6bf9bc-574w6\" (UID: \"22e0dd7b-458b-49cc-aec7-e6a2e03d9deb\") " pod="kube-system/coredns-668d6bf9bc-574w6" May 17 01:45:09.796698 kubelet[3068]: I0517 01:45:09.786374 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0cc1622-d561-4a8d-9dde-c3b01983d270-tigera-ca-bundle\") pod \"calico-kube-controllers-86b6b5cd9b-tjktk\" (UID: \"b0cc1622-d561-4a8d-9dde-c3b01983d270\") " pod="calico-system/calico-kube-controllers-86b6b5cd9b-tjktk" May 17 01:45:09.796698 kubelet[3068]: I0517 01:45:09.786390 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z77v6\" (UniqueName: \"kubernetes.io/projected/0bdfd9d7-2db3-41ee-a583-e9b26216debf-kube-api-access-z77v6\") pod \"whisker-7b6c6ddd44-8lgrj\" (UID: \"0bdfd9d7-2db3-41ee-a583-e9b26216debf\") " pod="calico-system/whisker-7b6c6ddd44-8lgrj" May 17 01:45:09.796698 kubelet[3068]: I0517 01:45:09.786428 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfh75\" (UniqueName: \"kubernetes.io/projected/3af411a3-09ba-4381-bce5-19753bf3d671-kube-api-access-hfh75\") pod \"coredns-668d6bf9bc-p8kz8\" (UID: \"3af411a3-09ba-4381-bce5-19753bf3d671\") " pod="kube-system/coredns-668d6bf9bc-p8kz8" May 17 01:45:09.796698 kubelet[3068]: I0517 01:45:09.786461 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-zwf9c\" (UID: \"4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8\") " pod="calico-system/goldmane-78d55f7ddc-zwf9c" May 17 01:45:09.796856 kubelet[3068]: I0517 01:45:09.786480 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3af411a3-09ba-4381-bce5-19753bf3d671-config-volume\") pod \"coredns-668d6bf9bc-p8kz8\" (UID: \"3af411a3-09ba-4381-bce5-19753bf3d671\") " pod="kube-system/coredns-668d6bf9bc-p8kz8" May 17 01:45:09.796856 kubelet[3068]: I0517 01:45:09.786497 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a83cb314-94c7-48be-9000-43244ee2be0f-calico-apiserver-certs\") pod 
\"calico-apiserver-bf55ffd57-6x6sv\" (UID: \"a83cb314-94c7-48be-9000-43244ee2be0f\") " pod="calico-apiserver/calico-apiserver-bf55ffd57-6x6sv" May 17 01:45:09.796856 kubelet[3068]: I0517 01:45:09.786512 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nlrp\" (UniqueName: \"kubernetes.io/projected/22e0dd7b-458b-49cc-aec7-e6a2e03d9deb-kube-api-access-4nlrp\") pod \"coredns-668d6bf9bc-574w6\" (UID: \"22e0dd7b-458b-49cc-aec7-e6a2e03d9deb\") " pod="kube-system/coredns-668d6bf9bc-574w6" May 17 01:45:09.796856 kubelet[3068]: I0517 01:45:09.786553 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0bdfd9d7-2db3-41ee-a583-e9b26216debf-whisker-backend-key-pair\") pod \"whisker-7b6c6ddd44-8lgrj\" (UID: \"0bdfd9d7-2db3-41ee-a583-e9b26216debf\") " pod="calico-system/whisker-7b6c6ddd44-8lgrj" May 17 01:45:09.796856 kubelet[3068]: I0517 01:45:09.786579 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99gvs\" (UniqueName: \"kubernetes.io/projected/a83cb314-94c7-48be-9000-43244ee2be0f-kube-api-access-99gvs\") pod \"calico-apiserver-bf55ffd57-6x6sv\" (UID: \"a83cb314-94c7-48be-9000-43244ee2be0f\") " pod="calico-apiserver/calico-apiserver-bf55ffd57-6x6sv" May 17 01:45:09.797004 kubelet[3068]: I0517 01:45:09.786592 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1fc7e728-2787-4a01-a1fd-dfaad847d529-calico-apiserver-certs\") pod \"calico-apiserver-bf55ffd57-tw6s9\" (UID: \"1fc7e728-2787-4a01-a1fd-dfaad847d529\") " pod="calico-apiserver/calico-apiserver-bf55ffd57-tw6s9" May 17 01:45:09.797004 kubelet[3068]: I0517 01:45:09.786604 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5wdt\" (UniqueName: \"kubernetes.io/projected/1fc7e728-2787-4a01-a1fd-dfaad847d529-kube-api-access-b5wdt\") pod \"calico-apiserver-bf55ffd57-tw6s9\" (UID: \"1fc7e728-2787-4a01-a1fd-dfaad847d529\") " pod="calico-apiserver/calico-apiserver-bf55ffd57-tw6s9" May 17 01:45:09.797004 kubelet[3068]: I0517 01:45:09.786616 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bdfd9d7-2db3-41ee-a583-e9b26216debf-whisker-ca-bundle\") pod \"whisker-7b6c6ddd44-8lgrj\" (UID: \"0bdfd9d7-2db3-41ee-a583-e9b26216debf\") " pod="calico-system/whisker-7b6c6ddd44-8lgrj" May 17 01:45:09.797004 kubelet[3068]: I0517 01:45:09.786630 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrqcc\" (UniqueName: \"kubernetes.io/projected/b0cc1622-d561-4a8d-9dde-c3b01983d270-kube-api-access-wrqcc\") pod \"calico-kube-controllers-86b6b5cd9b-tjktk\" (UID: \"b0cc1622-d561-4a8d-9dde-c3b01983d270\") " pod="calico-system/calico-kube-controllers-86b6b5cd9b-tjktk" May 17 01:45:09.797004 kubelet[3068]: I0517 01:45:09.786643 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97mm\" (UniqueName: \"kubernetes.io/projected/4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8-kube-api-access-z97mm\") pod \"goldmane-78d55f7ddc-zwf9c\" (UID: \"4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8\") " pod="calico-system/goldmane-78d55f7ddc-zwf9c" May 17 
01:45:09.970265 containerd[1819]: time="2025-05-17T01:45:09.970196909Z" level=info msg="shim disconnected" id=671a92373fc493a098f982b98b5b7e4f757f255c24e66d1c10ef59d8fc5903a8 namespace=k8s.io May 17 01:45:09.970265 containerd[1819]: time="2025-05-17T01:45:09.970230262Z" level=warning msg="cleaning up after shim disconnected" id=671a92373fc493a098f982b98b5b7e4f757f255c24e66d1c10ef59d8fc5903a8 namespace=k8s.io May 17 01:45:09.970265 containerd[1819]: time="2025-05-17T01:45:09.970236215Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 01:45:10.018039 containerd[1819]: time="2025-05-17T01:45:10.018002625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8kz8,Uid:3af411a3-09ba-4381-bce5-19753bf3d671,Namespace:kube-system,Attempt:0,}" May 17 01:45:10.023455 containerd[1819]: time="2025-05-17T01:45:10.023417710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-574w6,Uid:22e0dd7b-458b-49cc-aec7-e6a2e03d9deb,Namespace:kube-system,Attempt:0,}" May 17 01:45:10.030230 containerd[1819]: time="2025-05-17T01:45:10.030181245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b6b5cd9b-tjktk,Uid:b0cc1622-d561-4a8d-9dde-c3b01983d270,Namespace:calico-system,Attempt:0,}" May 17 01:45:10.035847 containerd[1819]: time="2025-05-17T01:45:10.035823340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf55ffd57-6x6sv,Uid:a83cb314-94c7-48be-9000-43244ee2be0f,Namespace:calico-apiserver,Attempt:0,}" May 17 01:45:10.041811 containerd[1819]: time="2025-05-17T01:45:10.041486140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf55ffd57-tw6s9,Uid:1fc7e728-2787-4a01-a1fd-dfaad847d529,Namespace:calico-apiserver,Attempt:0,}" May 17 01:45:10.045025 containerd[1819]: time="2025-05-17T01:45:10.044998692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-zwf9c,Uid:4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8,Namespace:calico-system,Attempt:0,}" May 17 01:45:10.046984 containerd[1819]: time="2025-05-17T01:45:10.046956790Z" level=error msg="Failed to destroy network for sandbox \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.047158 containerd[1819]: time="2025-05-17T01:45:10.047136685Z" level=error msg="encountered an error cleaning up failed sandbox \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.047212 containerd[1819]: time="2025-05-17T01:45:10.047172520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8kz8,Uid:3af411a3-09ba-4381-bce5-19753bf3d671,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.047350 kubelet[3068]: E0517 01:45:10.047316 3068 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.047556 kubelet[3068]: E0517 01:45:10.047387 3068 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p8kz8" May 17 01:45:10.047556 kubelet[3068]: E0517 01:45:10.047405 3068 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p8kz8" May 17 01:45:10.047556 kubelet[3068]: E0517 01:45:10.047442 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p8kz8_kube-system(3af411a3-09ba-4381-bce5-19753bf3d671)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p8kz8_kube-system(3af411a3-09ba-4381-bce5-19753bf3d671)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p8kz8" podUID="3af411a3-09ba-4381-bce5-19753bf3d671" May 17 01:45:10.048048 containerd[1819]: time="2025-05-17T01:45:10.048024984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b6c6ddd44-8lgrj,Uid:0bdfd9d7-2db3-41ee-a583-e9b26216debf,Namespace:calico-system,Attempt:0,}" May 17 01:45:10.051400 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2-shm.mount: Deactivated successfully. 
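Every RunPodSandbox failure in this stretch carries the same root-cause string: the Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container writes once it is up, and until then network setup for every pending pod (coredns, calico-kube-controllers, the apiservers, goldmane, whisker) fails identically. A minimal sketch of the gate the error message describes:

```python
# Minimal sketch of the readiness gate behind the repeated sandbox errors:
# the Calico CNI plugin requires /var/lib/calico/nodename (written by the
# calico/node container) before it can set up any pod network.
import os

NODENAME_FILE = "/var/lib/calico/nodename"

def calico_ready() -> bool:
    """True once calico/node has written its nodename file."""
    return os.path.isfile(NODENAME_FILE)

if not calico_ready():
    # Mirrors the journal errors: every RunPodSandbox attempt fails this way
    # until calico/node is running and has mounted /var/lib/calico/.
    raise FileNotFoundError(
        f"stat {NODENAME_FILE}: no such file or directory: "
        "check that the calico/node container is running and has mounted /var/lib/calico/"
    )
```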
May 17 01:45:10.058879 containerd[1819]: time="2025-05-17T01:45:10.058841769Z" level=error msg="Failed to destroy network for sandbox \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.059112 containerd[1819]: time="2025-05-17T01:45:10.059087080Z" level=error msg="encountered an error cleaning up failed sandbox \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.059158 containerd[1819]: time="2025-05-17T01:45:10.059131713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-574w6,Uid:22e0dd7b-458b-49cc-aec7-e6a2e03d9deb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.059339 kubelet[3068]: E0517 01:45:10.059305 3068 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.059384 kubelet[3068]: E0517 01:45:10.059368 3068 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-574w6" May 17 01:45:10.059408 kubelet[3068]: E0517 01:45:10.059391 3068 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-574w6" May 17 01:45:10.059461 kubelet[3068]: E0517 01:45:10.059438 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-574w6_kube-system(22e0dd7b-458b-49cc-aec7-e6a2e03d9deb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-574w6_kube-system(22e0dd7b-458b-49cc-aec7-e6a2e03d9deb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-574w6" 
podUID="22e0dd7b-458b-49cc-aec7-e6a2e03d9deb" May 17 01:45:10.062045 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a-shm.mount: Deactivated successfully. May 17 01:45:10.064539 containerd[1819]: time="2025-05-17T01:45:10.064512157Z" level=error msg="Failed to destroy network for sandbox \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.064737 containerd[1819]: time="2025-05-17T01:45:10.064710272Z" level=error msg="encountered an error cleaning up failed sandbox \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.064795 containerd[1819]: time="2025-05-17T01:45:10.064752310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b6b5cd9b-tjktk,Uid:b0cc1622-d561-4a8d-9dde-c3b01983d270,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.064966 kubelet[3068]: E0517 01:45:10.064935 3068 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.065015 kubelet[3068]: E0517 01:45:10.064986 3068 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86b6b5cd9b-tjktk" May 17 01:45:10.065015 kubelet[3068]: E0517 01:45:10.065005 3068 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86b6b5cd9b-tjktk" May 17 01:45:10.065067 kubelet[3068]: E0517 01:45:10.065032 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86b6b5cd9b-tjktk_calico-system(b0cc1622-d561-4a8d-9dde-c3b01983d270)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86b6b5cd9b-tjktk_calico-system(b0cc1622-d561-4a8d-9dde-c3b01983d270)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86b6b5cd9b-tjktk" podUID="b0cc1622-d561-4a8d-9dde-c3b01983d270" May 17 01:45:10.067244 containerd[1819]: time="2025-05-17T01:45:10.067208281Z" level=error msg="Failed to destroy network for sandbox \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.067483 containerd[1819]: time="2025-05-17T01:45:10.067461927Z" level=error msg="encountered an error cleaning up failed sandbox \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.067534 containerd[1819]: time="2025-05-17T01:45:10.067503933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf55ffd57-6x6sv,Uid:a83cb314-94c7-48be-9000-43244ee2be0f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.067692 kubelet[3068]: E0517 01:45:10.067664 3068 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.067745 kubelet[3068]: E0517 01:45:10.067706 3068 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bf55ffd57-6x6sv" May 17 01:45:10.067745 kubelet[3068]: E0517 01:45:10.067721 3068 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bf55ffd57-6x6sv" May 17 01:45:10.067785 kubelet[3068]: E0517 01:45:10.067747 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bf55ffd57-6x6sv_calico-apiserver(a83cb314-94c7-48be-9000-43244ee2be0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-bf55ffd57-6x6sv_calico-apiserver(a83cb314-94c7-48be-9000-43244ee2be0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bf55ffd57-6x6sv" podUID="a83cb314-94c7-48be-9000-43244ee2be0f" May 17 01:45:10.073561 containerd[1819]: time="2025-05-17T01:45:10.073526348Z" level=error msg="Failed to destroy network for sandbox \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.073739 containerd[1819]: time="2025-05-17T01:45:10.073724911Z" level=error msg="encountered an error cleaning up failed sandbox \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.073766 containerd[1819]: time="2025-05-17T01:45:10.073755410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf55ffd57-tw6s9,Uid:1fc7e728-2787-4a01-a1fd-dfaad847d529,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.073900 kubelet[3068]: E0517 01:45:10.073881 3068 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.073931 kubelet[3068]: E0517 01:45:10.073919 3068 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bf55ffd57-tw6s9" May 17 01:45:10.073955 kubelet[3068]: E0517 01:45:10.073932 3068 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bf55ffd57-tw6s9" May 17 01:45:10.073980 kubelet[3068]: E0517 01:45:10.073956 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-bf55ffd57-tw6s9_calico-apiserver(1fc7e728-2787-4a01-a1fd-dfaad847d529)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bf55ffd57-tw6s9_calico-apiserver(1fc7e728-2787-4a01-a1fd-dfaad847d529)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bf55ffd57-tw6s9" podUID="1fc7e728-2787-4a01-a1fd-dfaad847d529" May 17 01:45:10.078845 containerd[1819]: time="2025-05-17T01:45:10.078810752Z" level=error msg="Failed to destroy network for sandbox \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.079049 containerd[1819]: time="2025-05-17T01:45:10.079011221Z" level=error msg="encountered an error cleaning up failed sandbox \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.079079 containerd[1819]: time="2025-05-17T01:45:10.079044846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-zwf9c,Uid:4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.079152 kubelet[3068]: E0517 01:45:10.079134 3068 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.079188 kubelet[3068]: E0517 01:45:10.079164 3068 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-zwf9c" May 17 01:45:10.079188 kubelet[3068]: E0517 01:45:10.079178 3068 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-zwf9c" May 17 01:45:10.079228 kubelet[3068]: E0517 01:45:10.079203 
3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-zwf9c_calico-system(4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-zwf9c_calico-system(4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:45:10.083614 containerd[1819]: time="2025-05-17T01:45:10.083566714Z" level=error msg="Failed to destroy network for sandbox \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.083767 containerd[1819]: time="2025-05-17T01:45:10.083722633Z" level=error msg="encountered an error cleaning up failed sandbox \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.083767 containerd[1819]: time="2025-05-17T01:45:10.083746114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b6c6ddd44-8lgrj,Uid:0bdfd9d7-2db3-41ee-a583-e9b26216debf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.083895 kubelet[3068]: E0517 01:45:10.083856 3068 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.083895 kubelet[3068]: E0517 01:45:10.083882 3068 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b6c6ddd44-8lgrj" May 17 01:45:10.083944 kubelet[3068]: E0517 01:45:10.083896 3068 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b6c6ddd44-8lgrj" 
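Every sandbox failure above reduces to the same root cause: before the Calico CNI plugin will set up (or later tear down) a pod network, it reads /var/lib/calico/nodename, and that file only exists once the calico/node container is running and has bind-mounted /var/lib/calico/ from the host. A minimal Go sketch of that readiness gate, using only the path and error text visible in the log (an illustration of the check, not Calico's actual source):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// nodenameFile is the marker the CNI plugin looks for; calico/node writes it
// after startup into the /var/lib/calico/ directory it mounts from the host.
const nodenameFile = "/var/lib/calico/nodename"

// nodename returns the Calico node name, or an error shaped like the one in
// the log when calico/node has not populated the file yet.
func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, fs.ErrNotExist) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	if name, err := nodename(); err != nil {
		fmt.Println("CNI add/delete would fail:", err) // the state this node is in
	} else {
		fmt.Println("node:", name)
	}
}

Until calico/node starts (its image is still being pulled at this point in the log), every add and delete fails identically, which is why kubelet keeps emitting CreatePodSandboxError for each pending pod.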
May 17 01:45:10.083944 kubelet[3068]: E0517 01:45:10.083920 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b6c6ddd44-8lgrj_calico-system(0bdfd9d7-2db3-41ee-a583-e9b26216debf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b6c6ddd44-8lgrj_calico-system(0bdfd9d7-2db3-41ee-a583-e9b26216debf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b6c6ddd44-8lgrj" podUID="0bdfd9d7-2db3-41ee-a583-e9b26216debf" May 17 01:45:10.702946 systemd[1]: Created slice kubepods-besteffort-pod64a6b624_b75c_46f6_8f62_c89636ac29be.slice - libcontainer container kubepods-besteffort-pod64a6b624_b75c_46f6_8f62_c89636ac29be.slice. May 17 01:45:10.706656 containerd[1819]: time="2025-05-17T01:45:10.706567509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4dvh,Uid:64a6b624-b75c-46f6-8f62-c89636ac29be,Namespace:calico-system,Attempt:0,}" May 17 01:45:10.733836 containerd[1819]: time="2025-05-17T01:45:10.733805747Z" level=error msg="Failed to destroy network for sandbox \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.733995 containerd[1819]: time="2025-05-17T01:45:10.733982557Z" level=error msg="encountered an error cleaning up failed sandbox \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.734019 containerd[1819]: time="2025-05-17T01:45:10.734010071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4dvh,Uid:64a6b624-b75c-46f6-8f62-c89636ac29be,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.734181 kubelet[3068]: E0517 01:45:10.734146 3068 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.734209 kubelet[3068]: E0517 01:45:10.734198 3068 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-k4dvh" May 17 01:45:10.734231 kubelet[3068]: E0517 01:45:10.734211 3068 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k4dvh" May 17 01:45:10.734252 kubelet[3068]: E0517 01:45:10.734236 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k4dvh_calico-system(64a6b624-b75c-46f6-8f62-c89636ac29be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k4dvh_calico-system(64a6b624-b75c-46f6-8f62-c89636ac29be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k4dvh" podUID="64a6b624-b75c-46f6-8f62-c89636ac29be" May 17 01:45:10.768389 kubelet[3068]: I0517 01:45:10.768358 3068 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:10.768533 containerd[1819]: time="2025-05-17T01:45:10.768419925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 01:45:10.769024 containerd[1819]: time="2025-05-17T01:45:10.769010123Z" level=info msg="StopPodSandbox for \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\"" May 17 01:45:10.769063 kubelet[3068]: I0517 01:45:10.769027 3068 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:10.769115 containerd[1819]: time="2025-05-17T01:45:10.769103913Z" level=info msg="Ensure that sandbox 8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6 in task-service has been cleanup successfully" May 17 01:45:10.769257 containerd[1819]: time="2025-05-17T01:45:10.769247840Z" level=info msg="StopPodSandbox for \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\"" May 17 01:45:10.769342 containerd[1819]: time="2025-05-17T01:45:10.769331165Z" level=info msg="Ensure that sandbox 96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a in task-service has been cleanup successfully" May 17 01:45:10.769581 kubelet[3068]: I0517 01:45:10.769573 3068 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:10.769780 containerd[1819]: time="2025-05-17T01:45:10.769766830Z" level=info msg="StopPodSandbox for \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\"" May 17 01:45:10.769851 containerd[1819]: time="2025-05-17T01:45:10.769842617Z" level=info msg="Ensure that sandbox df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61 in task-service has been cleanup successfully" May 17 01:45:10.770031 kubelet[3068]: I0517 01:45:10.770021 3068 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:10.770312 containerd[1819]: time="2025-05-17T01:45:10.770298774Z" level=info msg="StopPodSandbox for \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\"" May 17 01:45:10.770407 containerd[1819]: time="2025-05-17T01:45:10.770396577Z" level=info msg="Ensure that sandbox 3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2 in task-service has been cleanup successfully" May 17 01:45:10.772307 kubelet[3068]: I0517 01:45:10.772283 3068 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:10.772735 containerd[1819]: time="2025-05-17T01:45:10.772700908Z" level=info msg="StopPodSandbox for \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\"" May 17 01:45:10.772874 containerd[1819]: time="2025-05-17T01:45:10.772859637Z" level=info msg="Ensure that sandbox 2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3 in task-service has been cleanup successfully" May 17 01:45:10.773021 kubelet[3068]: I0517 01:45:10.773007 3068 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:10.773324 containerd[1819]: time="2025-05-17T01:45:10.773304221Z" level=info msg="StopPodSandbox for \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\"" May 17 01:45:10.773455 containerd[1819]: time="2025-05-17T01:45:10.773442818Z" level=info msg="Ensure that sandbox 51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c in task-service has been cleanup successfully" May 17 01:45:10.773640 kubelet[3068]: I0517 01:45:10.773621 3068 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:10.774071 containerd[1819]: time="2025-05-17T01:45:10.774044974Z" level=info msg="StopPodSandbox for \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\"" May 17 01:45:10.774205 containerd[1819]: time="2025-05-17T01:45:10.774189855Z" level=info msg="Ensure that sandbox 94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4 in task-service has been cleanup successfully" May 17 01:45:10.774330 kubelet[3068]: I0517 01:45:10.774315 3068 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:10.774651 containerd[1819]: time="2025-05-17T01:45:10.774624829Z" level=info msg="StopPodSandbox for \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\"" May 17 01:45:10.774864 containerd[1819]: time="2025-05-17T01:45:10.774850363Z" level=info msg="Ensure that sandbox ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f in task-service has been cleanup successfully" May 17 01:45:10.786819 containerd[1819]: time="2025-05-17T01:45:10.786704958Z" level=error msg="StopPodSandbox for \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\" failed" error="failed to destroy network for sandbox \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.787074 kubelet[3068]: E0517 01:45:10.787043 3068 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:10.787405 kubelet[3068]: E0517 01:45:10.787105 3068 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6"} May 17 01:45:10.787405 kubelet[3068]: E0517 01:45:10.787160 3068 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"64a6b624-b75c-46f6-8f62-c89636ac29be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:45:10.787405 kubelet[3068]: E0517 01:45:10.787185 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"64a6b624-b75c-46f6-8f62-c89636ac29be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k4dvh" podUID="64a6b624-b75c-46f6-8f62-c89636ac29be" May 17 01:45:10.788324 containerd[1819]: time="2025-05-17T01:45:10.788211157Z" level=error msg="StopPodSandbox for \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\" failed" error="failed to destroy network for sandbox \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.788324 containerd[1819]: time="2025-05-17T01:45:10.788211788Z" level=error msg="StopPodSandbox for \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\" failed" error="failed to destroy network for sandbox \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.788389 kubelet[3068]: E0517 01:45:10.788372 3068 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:10.788412 kubelet[3068]: E0517 01:45:10.788399 3068 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a"} May 17 01:45:10.788442 kubelet[3068]: E0517 01:45:10.788428 3068 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22e0dd7b-458b-49cc-aec7-e6a2e03d9deb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:45:10.788477 kubelet[3068]: E0517 01:45:10.788443 3068 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:10.788477 kubelet[3068]: E0517 01:45:10.788452 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22e0dd7b-458b-49cc-aec7-e6a2e03d9deb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-574w6" podUID="22e0dd7b-458b-49cc-aec7-e6a2e03d9deb" May 17 01:45:10.788529 kubelet[3068]: E0517 01:45:10.788463 3068 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61"} May 17 01:45:10.788529 kubelet[3068]: E0517 01:45:10.788498 3068 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a83cb314-94c7-48be-9000-43244ee2be0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:45:10.788529 kubelet[3068]: E0517 01:45:10.788508 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a83cb314-94c7-48be-9000-43244ee2be0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bf55ffd57-6x6sv" podUID="a83cb314-94c7-48be-9000-43244ee2be0f" May 17 01:45:10.788789 containerd[1819]: time="2025-05-17T01:45:10.788767051Z" level=error msg="StopPodSandbox for \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\" failed" error="failed to destroy network for sandbox 
\"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.788961 kubelet[3068]: E0517 01:45:10.788897 3068 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:10.788961 kubelet[3068]: E0517 01:45:10.788923 3068 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c"} May 17 01:45:10.788961 kubelet[3068]: E0517 01:45:10.788937 3068 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0cc1622-d561-4a8d-9dde-c3b01983d270\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:45:10.788961 kubelet[3068]: E0517 01:45:10.788948 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0cc1622-d561-4a8d-9dde-c3b01983d270\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86b6b5cd9b-tjktk" podUID="b0cc1622-d561-4a8d-9dde-c3b01983d270" May 17 01:45:10.789102 containerd[1819]: time="2025-05-17T01:45:10.789034557Z" level=error msg="StopPodSandbox for \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\" failed" error="failed to destroy network for sandbox \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.789148 kubelet[3068]: E0517 01:45:10.789102 3068 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:10.789148 kubelet[3068]: E0517 01:45:10.789124 3068 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2"} May 17 01:45:10.789216 kubelet[3068]: E0517 01:45:10.789146 3068 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3af411a3-09ba-4381-bce5-19753bf3d671\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:45:10.789216 kubelet[3068]: E0517 01:45:10.789164 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3af411a3-09ba-4381-bce5-19753bf3d671\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p8kz8" podUID="3af411a3-09ba-4381-bce5-19753bf3d671" May 17 01:45:10.789641 containerd[1819]: time="2025-05-17T01:45:10.789620951Z" level=error msg="StopPodSandbox for \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\" failed" error="failed to destroy network for sandbox \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.789694 containerd[1819]: time="2025-05-17T01:45:10.789672127Z" level=error msg="StopPodSandbox for \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\" failed" error="failed to destroy network for sandbox \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.789728 kubelet[3068]: E0517 01:45:10.789696 3068 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:10.789728 kubelet[3068]: E0517 01:45:10.789716 3068 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4"} May 17 01:45:10.789767 kubelet[3068]: E0517 01:45:10.789738 3068 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0bdfd9d7-2db3-41ee-a583-e9b26216debf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:45:10.789767 kubelet[3068]: E0517 01:45:10.789749 3068 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"0bdfd9d7-2db3-41ee-a583-e9b26216debf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b6c6ddd44-8lgrj" podUID="0bdfd9d7-2db3-41ee-a583-e9b26216debf" May 17 01:45:10.789767 kubelet[3068]: E0517 01:45:10.789753 3068 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:10.789849 kubelet[3068]: E0517 01:45:10.789769 3068 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3"} May 17 01:45:10.789849 kubelet[3068]: E0517 01:45:10.789784 3068 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1fc7e728-2787-4a01-a1fd-dfaad847d529\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:45:10.789849 kubelet[3068]: E0517 01:45:10.789794 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1fc7e728-2787-4a01-a1fd-dfaad847d529\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bf55ffd57-tw6s9" podUID="1fc7e728-2787-4a01-a1fd-dfaad847d529" May 17 01:45:10.790716 containerd[1819]: time="2025-05-17T01:45:10.790674353Z" level=error msg="StopPodSandbox for \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\" failed" error="failed to destroy network for sandbox \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:45:10.790753 kubelet[3068]: E0517 01:45:10.790734 3068 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:10.790777 kubelet[3068]: 
E0517 01:45:10.790756 3068 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f"} May 17 01:45:10.790777 kubelet[3068]: E0517 01:45:10.790771 3068 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:45:10.790828 kubelet[3068]: E0517 01:45:10.790781 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:45:11.040637 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4-shm.mount: Deactivated successfully. May 17 01:45:11.040685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f-shm.mount: Deactivated successfully. May 17 01:45:11.040719 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3-shm.mount: Deactivated successfully. May 17 01:45:11.040752 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61-shm.mount: Deactivated successfully. May 17 01:45:11.040784 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c-shm.mount: Deactivated successfully. May 17 01:45:16.590665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714892501.mount: Deactivated successfully. 
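Note the error shape kubelet keeps recording: rpc error: code = Unknown desc = … . The runtime hands each CNI failure back over CRI as a gRPC status with code Unknown; kubelet logs it, marks the pod sync as failed ("Error syncing pod, skipping"), and retries on the next sync, while the per-sandbox shm mounts above are the only cleanup that succeeds. A small Go sketch of how such an error is produced and inspected with grpc-go's status package (illustrative of the error format, not containerd's actual code path):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// A CRI runtime returns failures as gRPC statuses; rendered as a string,
	// this yields exactly the "rpc error: code = Unknown desc = ..." form.
	err := status.Errorf(codes.Unknown,
		"failed to destroy network for sandbox %q: plugin type=%q failed (delete): stat /var/lib/calico/nodename: no such file or directory",
		"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6", "calico")

	fmt.Println(err)

	// A caller such as kubelet can recover the code and message to decide
	// how to log the failure and whether to retry on the next pod sync.
	if st, ok := status.FromError(err); ok {
		fmt.Println("code:", st.Code(), "| message:", st.Message())
	}
}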
May 17 01:45:16.607706 containerd[1819]: time="2025-05-17T01:45:16.607684554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:16.607931 containerd[1819]: time="2025-05-17T01:45:16.607908687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 01:45:16.608295 containerd[1819]: time="2025-05-17T01:45:16.608281519Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:16.609262 containerd[1819]: time="2025-05-17T01:45:16.609246891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:16.609675 containerd[1819]: time="2025-05-17T01:45:16.609629847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 5.841176843s" May 17 01:45:16.609675 containerd[1819]: time="2025-05-17T01:45:16.609646899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 01:45:16.613030 containerd[1819]: time="2025-05-17T01:45:16.612989593Z" level=info msg="CreateContainer within sandbox \"beabbf3e7e9527ebcab2909f09af32e17e0a99e15fcee5717e463cd9118b3bff\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 01:45:16.619265 containerd[1819]: time="2025-05-17T01:45:16.619244866Z" level=info msg="CreateContainer within sandbox \"beabbf3e7e9527ebcab2909f09af32e17e0a99e15fcee5717e463cd9118b3bff\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7103eda2d272039751376820e1d7b03c6a8bbc24eba7b9c2783f81faa4fbf293\"" May 17 01:45:16.619560 containerd[1819]: time="2025-05-17T01:45:16.619544655Z" level=info msg="StartContainer for \"7103eda2d272039751376820e1d7b03c6a8bbc24eba7b9c2783f81faa4fbf293\"" May 17 01:45:16.646543 systemd[1]: Started cri-containerd-7103eda2d272039751376820e1d7b03c6a8bbc24eba7b9c2783f81faa4fbf293.scope - libcontainer container 7103eda2d272039751376820e1d7b03c6a8bbc24eba7b9c2783f81faa4fbf293. May 17 01:45:16.661209 containerd[1819]: time="2025-05-17T01:45:16.661186635Z" level=info msg="StartContainer for \"7103eda2d272039751376820e1d7b03c6a8bbc24eba7b9c2783f81faa4fbf293\" returns successfully" May 17 01:45:16.723285 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 01:45:16.723338 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
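This is the pull that unblocks everything: containerd reports the ~156 MB calico/node image landing in 5.841176843s, which is simply the elapsed time since the PullImage request logged at 01:45:10.768 above, and calico-node starts immediately afterwards (the WireGuard module load right after is consistent with calico/node probing for its optional WireGuard encryption support). A quick Go check of that arithmetic from the two log timestamps (the throughput figure is derived here, not taken from the log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log: the PullImage request and the Pulled event.
	start, _ := time.Parse(time.RFC3339Nano, "2025-05-17T01:45:10.768419925Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-05-17T01:45:16.609629847Z")

	d := done.Sub(start)
	fmt.Println("observed pull duration:", d) // ~5.84s, matching "in 5.841176843s"

	// Effective rate for the 156,396,234-byte image reported above.
	fmt.Printf("throughput: %.1f MB/s\n", 156396234/d.Seconds()/1e6)
}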
May 17 01:45:16.813238 containerd[1819]: time="2025-05-17T01:45:16.813144565Z" level=info msg="StopPodSandbox for \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\"" May 17 01:45:16.820510 kubelet[3068]: I0517 01:45:16.820442 3068 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-87d8h" podStartSLOduration=1.031557418 podStartE2EDuration="16.820420389s" podCreationTimestamp="2025-05-17 01:45:00 +0000 UTC" firstStartedPulling="2025-05-17 01:45:00.821092619 +0000 UTC m=+16.177546298" lastFinishedPulling="2025-05-17 01:45:16.609955602 +0000 UTC m=+31.966409269" observedRunningTime="2025-05-17 01:45:16.819846691 +0000 UTC m=+32.176300390" watchObservedRunningTime="2025-05-17 01:45:16.820420389 +0000 UTC m=+32.176874069" May 17 01:45:16.864851 containerd[1819]: 2025-05-17 01:45:16.846 [INFO][4665] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:16.864851 containerd[1819]: 2025-05-17 01:45:16.846 [INFO][4665] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" iface="eth0" netns="/var/run/netns/cni-d5ceb39a-cb51-3d83-929a-8f4cb1f8043c" May 17 01:45:16.864851 containerd[1819]: 2025-05-17 01:45:16.846 [INFO][4665] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" iface="eth0" netns="/var/run/netns/cni-d5ceb39a-cb51-3d83-929a-8f4cb1f8043c" May 17 01:45:16.864851 containerd[1819]: 2025-05-17 01:45:16.846 [INFO][4665] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" iface="eth0" netns="/var/run/netns/cni-d5ceb39a-cb51-3d83-929a-8f4cb1f8043c" May 17 01:45:16.864851 containerd[1819]: 2025-05-17 01:45:16.846 [INFO][4665] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:16.864851 containerd[1819]: 2025-05-17 01:45:16.846 [INFO][4665] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:16.864851 containerd[1819]: 2025-05-17 01:45:16.857 [INFO][4698] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" HandleID="k8s-pod-network.94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" Workload="ci--4081.3.3--n--d569167b40-k8s-whisker--7b6c6ddd44--8lgrj-eth0" May 17 01:45:16.864851 containerd[1819]: 2025-05-17 01:45:16.857 [INFO][4698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:16.864851 containerd[1819]: 2025-05-17 01:45:16.857 [INFO][4698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:16.864851 containerd[1819]: 2025-05-17 01:45:16.861 [WARNING][4698] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" HandleID="k8s-pod-network.94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" Workload="ci--4081.3.3--n--d569167b40-k8s-whisker--7b6c6ddd44--8lgrj-eth0" May 17 01:45:16.864851 containerd[1819]: 2025-05-17 01:45:16.861 [INFO][4698] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" HandleID="k8s-pod-network.94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" Workload="ci--4081.3.3--n--d569167b40-k8s-whisker--7b6c6ddd44--8lgrj-eth0" May 17 01:45:16.864851 containerd[1819]: 2025-05-17 01:45:16.862 [INFO][4698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:16.864851 containerd[1819]: 2025-05-17 01:45:16.863 [INFO][4665] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:16.865162 containerd[1819]: time="2025-05-17T01:45:16.864897298Z" level=info msg="TearDown network for sandbox \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\" successfully" May 17 01:45:16.865162 containerd[1819]: time="2025-05-17T01:45:16.864919540Z" level=info msg="StopPodSandbox for \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\" returns successfully" May 17 01:45:16.941120 kubelet[3068]: I0517 01:45:16.941082 3068 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z77v6\" (UniqueName: \"kubernetes.io/projected/0bdfd9d7-2db3-41ee-a583-e9b26216debf-kube-api-access-z77v6\") pod \"0bdfd9d7-2db3-41ee-a583-e9b26216debf\" (UID: \"0bdfd9d7-2db3-41ee-a583-e9b26216debf\") " May 17 01:45:16.941244 kubelet[3068]: I0517 01:45:16.941138 3068 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0bdfd9d7-2db3-41ee-a583-e9b26216debf-whisker-backend-key-pair\") pod \"0bdfd9d7-2db3-41ee-a583-e9b26216debf\" (UID: \"0bdfd9d7-2db3-41ee-a583-e9b26216debf\") " May 17 01:45:16.941244 kubelet[3068]: I0517 01:45:16.941173 3068 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bdfd9d7-2db3-41ee-a583-e9b26216debf-whisker-ca-bundle\") pod \"0bdfd9d7-2db3-41ee-a583-e9b26216debf\" (UID: \"0bdfd9d7-2db3-41ee-a583-e9b26216debf\") " May 17 01:45:16.941577 kubelet[3068]: I0517 01:45:16.941557 3068 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdfd9d7-2db3-41ee-a583-e9b26216debf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0bdfd9d7-2db3-41ee-a583-e9b26216debf" (UID: "0bdfd9d7-2db3-41ee-a583-e9b26216debf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 01:45:16.942969 kubelet[3068]: I0517 01:45:16.942957 3068 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bdfd9d7-2db3-41ee-a583-e9b26216debf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0bdfd9d7-2db3-41ee-a583-e9b26216debf" (UID: "0bdfd9d7-2db3-41ee-a583-e9b26216debf"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 01:45:16.942969 kubelet[3068]: I0517 01:45:16.942960 3068 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bdfd9d7-2db3-41ee-a583-e9b26216debf-kube-api-access-z77v6" (OuterVolumeSpecName: "kube-api-access-z77v6") pod "0bdfd9d7-2db3-41ee-a583-e9b26216debf" (UID: "0bdfd9d7-2db3-41ee-a583-e9b26216debf"). InnerVolumeSpecName "kube-api-access-z77v6". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 01:45:17.042207 kubelet[3068]: I0517 01:45:17.042094 3068 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z77v6\" (UniqueName: \"kubernetes.io/projected/0bdfd9d7-2db3-41ee-a583-e9b26216debf-kube-api-access-z77v6\") on node \"ci-4081.3.3-n-d569167b40\" DevicePath \"\"" May 17 01:45:17.042207 kubelet[3068]: I0517 01:45:17.042161 3068 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0bdfd9d7-2db3-41ee-a583-e9b26216debf-whisker-backend-key-pair\") on node \"ci-4081.3.3-n-d569167b40\" DevicePath \"\"" May 17 01:45:17.042207 kubelet[3068]: I0517 01:45:17.042190 3068 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bdfd9d7-2db3-41ee-a583-e9b26216debf-whisker-ca-bundle\") on node \"ci-4081.3.3-n-d569167b40\" DevicePath \"\"" May 17 01:45:17.595673 systemd[1]: run-netns-cni\x2dd5ceb39a\x2dcb51\x2d3d83\x2d929a\x2d8f4cb1f8043c.mount: Deactivated successfully. May 17 01:45:17.595737 systemd[1]: var-lib-kubelet-pods-0bdfd9d7\x2d2db3\x2d41ee\x2da583\x2de9b26216debf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz77v6.mount: Deactivated successfully. May 17 01:45:17.595779 systemd[1]: var-lib-kubelet-pods-0bdfd9d7\x2d2db3\x2d41ee\x2da583\x2de9b26216debf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 01:45:17.806432 systemd[1]: Removed slice kubepods-besteffort-pod0bdfd9d7_2db3_41ee_a583_e9b26216debf.slice - libcontainer container kubepods-besteffort-pod0bdfd9d7_2db3_41ee_a583_e9b26216debf.slice. May 17 01:45:17.829944 systemd[1]: Created slice kubepods-besteffort-podb211c981_6173_4ca8_aa53_cf31a5319b90.slice - libcontainer container kubepods-besteffort-podb211c981_6173_4ca8_aa53_cf31a5319b90.slice. 
May 17 01:45:17.948376 kubelet[3068]: I0517 01:45:17.948279 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b211c981-6173-4ca8-aa53-cf31a5319b90-whisker-ca-bundle\") pod \"whisker-757f968694-hv974\" (UID: \"b211c981-6173-4ca8-aa53-cf31a5319b90\") " pod="calico-system/whisker-757f968694-hv974" May 17 01:45:17.948376 kubelet[3068]: I0517 01:45:17.948319 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md4bp\" (UniqueName: \"kubernetes.io/projected/b211c981-6173-4ca8-aa53-cf31a5319b90-kube-api-access-md4bp\") pod \"whisker-757f968694-hv974\" (UID: \"b211c981-6173-4ca8-aa53-cf31a5319b90\") " pod="calico-system/whisker-757f968694-hv974" May 17 01:45:17.948376 kubelet[3068]: I0517 01:45:17.948347 3068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b211c981-6173-4ca8-aa53-cf31a5319b90-whisker-backend-key-pair\") pod \"whisker-757f968694-hv974\" (UID: \"b211c981-6173-4ca8-aa53-cf31a5319b90\") " pod="calico-system/whisker-757f968694-hv974" May 17 01:45:18.133358 containerd[1819]: time="2025-05-17T01:45:18.133214572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757f968694-hv974,Uid:b211c981-6173-4ca8-aa53-cf31a5319b90,Namespace:calico-system,Attempt:0,}" May 17 01:45:18.196758 systemd-networkd[1605]: cali17ad2abce04: Link UP May 17 01:45:18.197052 systemd-networkd[1605]: cali17ad2abce04: Gained carrier May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.150 [INFO][4890] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.157 [INFO][4890] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-eth0 whisker-757f968694- calico-system b211c981-6173-4ca8-aa53-cf31a5319b90 857 0 2025-05-17 01:45:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:757f968694 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.3-n-d569167b40 whisker-757f968694-hv974 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali17ad2abce04 [] [] }} ContainerID="df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" Namespace="calico-system" Pod="whisker-757f968694-hv974" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-" May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.157 [INFO][4890] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" Namespace="calico-system" Pod="whisker-757f968694-hv974" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-eth0" May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.172 [INFO][4911] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" HandleID="k8s-pod-network.df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" Workload="ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-eth0" May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.172 [INFO][4911] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" HandleID="k8s-pod-network.df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" Workload="ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bbe70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-d569167b40", "pod":"whisker-757f968694-hv974", "timestamp":"2025-05-17 01:45:18.172326372 +0000 UTC"}, Hostname:"ci-4081.3.3-n-d569167b40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.172 [INFO][4911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.172 [INFO][4911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.172 [INFO][4911] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-d569167b40' May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.176 [INFO][4911] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" host="ci-4081.3.3-n-d569167b40" May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.179 [INFO][4911] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-d569167b40" May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.181 [INFO][4911] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.182 [INFO][4911] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.183 [INFO][4911] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.184 [INFO][4911] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" host="ci-4081.3.3-n-d569167b40" May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.184 [INFO][4911] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841 May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.187 [INFO][4911] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" host="ci-4081.3.3-n-d569167b40" May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.190 [INFO][4911] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" host="ci-4081.3.3-n-d569167b40" May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.190 [INFO][4911] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" host="ci-4081.3.3-n-d569167b40" May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.190 
[INFO][4911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:18.205451 containerd[1819]: 2025-05-17 01:45:18.190 [INFO][4911] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" HandleID="k8s-pod-network.df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" Workload="ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-eth0" May 17 01:45:18.206132 containerd[1819]: 2025-05-17 01:45:18.191 [INFO][4890] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" Namespace="calico-system" Pod="whisker-757f968694-hv974" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-eth0", GenerateName:"whisker-757f968694-", Namespace:"calico-system", SelfLink:"", UID:"b211c981-6173-4ca8-aa53-cf31a5319b90", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 45, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"757f968694", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"", Pod:"whisker-757f968694-hv974", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali17ad2abce04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:18.206132 containerd[1819]: 2025-05-17 01:45:18.191 [INFO][4890] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" Namespace="calico-system" Pod="whisker-757f968694-hv974" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-eth0" May 17 01:45:18.206132 containerd[1819]: 2025-05-17 01:45:18.191 [INFO][4890] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17ad2abce04 ContainerID="df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" Namespace="calico-system" Pod="whisker-757f968694-hv974" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-eth0" May 17 01:45:18.206132 containerd[1819]: 2025-05-17 01:45:18.197 [INFO][4890] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" Namespace="calico-system" Pod="whisker-757f968694-hv974" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-eth0" May 17 01:45:18.206132 containerd[1819]: 2025-05-17 01:45:18.197 [INFO][4890] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" Namespace="calico-system" Pod="whisker-757f968694-hv974" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-eth0", GenerateName:"whisker-757f968694-", Namespace:"calico-system", SelfLink:"", UID:"b211c981-6173-4ca8-aa53-cf31a5319b90", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 45, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"757f968694", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841", Pod:"whisker-757f968694-hv974", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali17ad2abce04", MAC:"02:1a:e9:f3:e0:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:18.206132 containerd[1819]: 2025-05-17 01:45:18.203 [INFO][4890] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841" Namespace="calico-system" Pod="whisker-757f968694-hv974" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-whisker--757f968694--hv974-eth0" May 17 01:45:18.214268 containerd[1819]: time="2025-05-17T01:45:18.214219998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:45:18.214486 containerd[1819]: time="2025-05-17T01:45:18.214466114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:45:18.214486 containerd[1819]: time="2025-05-17T01:45:18.214479409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:18.214544 containerd[1819]: time="2025-05-17T01:45:18.214528800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:18.233564 systemd[1]: Started cri-containerd-df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841.scope - libcontainer container df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841. 
May 17 01:45:18.261828 containerd[1819]: time="2025-05-17T01:45:18.261774722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757f968694-hv974,Uid:b211c981-6173-4ca8-aa53-cf31a5319b90,Namespace:calico-system,Attempt:0,} returns sandbox id \"df62a437cac16c5fbcaed6d2fe0e953bda8882c089192ba25f13f9431eb21841\"" May 17 01:45:18.262785 containerd[1819]: time="2025-05-17T01:45:18.262767989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:45:18.586757 containerd[1819]: time="2025-05-17T01:45:18.586658731Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:45:18.587638 containerd[1819]: time="2025-05-17T01:45:18.587551562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:45:18.587638 containerd[1819]: time="2025-05-17T01:45:18.587626741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 01:45:18.587841 kubelet[3068]: E0517 01:45:18.587788 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:45:18.587841 kubelet[3068]: E0517 01:45:18.587824 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:45:18.588072 kubelet[3068]: E0517 01:45:18.587987 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:92a8012b5750456bb9056172472acd21,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-md4bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f968694-hv974_calico-system(b211c981-6173-4ca8-aa53-cf31a5319b90): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:45:18.589905 containerd[1819]: time="2025-05-17T01:45:18.589892088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:45:18.702041 kubelet[3068]: I0517 01:45:18.701971 3068 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bdfd9d7-2db3-41ee-a583-e9b26216debf" path="/var/lib/kubelet/pods/0bdfd9d7-2db3-41ee-a583-e9b26216debf/volumes" May 17 01:45:18.901774 containerd[1819]: time="2025-05-17T01:45:18.901516305Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:45:18.902587 containerd[1819]: time="2025-05-17T01:45:18.902560730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:45:18.902904 containerd[1819]: time="2025-05-17T01:45:18.902866505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 01:45:18.903029 kubelet[3068]: E0517 01:45:18.902997 3068 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:45:18.903132 kubelet[3068]: E0517 01:45:18.903044 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:45:18.903183 kubelet[3068]: E0517 01:45:18.903150 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-md4bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f968694-hv974_calico-system(b211c981-6173-4ca8-aa53-cf31a5319b90): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:45:18.904440 kubelet[3068]: E0517 01:45:18.904420 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:45:19.806823 kubelet[3068]: E0517 01:45:19.806689 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:45:20.159495 systemd-networkd[1605]: cali17ad2abce04: Gained IPv6LL May 17 01:45:22.697040 containerd[1819]: time="2025-05-17T01:45:22.696940242Z" level=info msg="StopPodSandbox for \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\"" May 17 01:45:22.697988 containerd[1819]: time="2025-05-17T01:45:22.697129937Z" level=info msg="StopPodSandbox for \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\"" May 17 01:45:22.740187 containerd[1819]: 2025-05-17 01:45:22.723 [INFO][5186] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:22.740187 containerd[1819]: 2025-05-17 01:45:22.723 [INFO][5186] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" iface="eth0" netns="/var/run/netns/cni-2c10cf63-8dd0-5253-fc0b-3ae7357a999a" May 17 01:45:22.740187 containerd[1819]: 2025-05-17 01:45:22.724 [INFO][5186] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" iface="eth0" netns="/var/run/netns/cni-2c10cf63-8dd0-5253-fc0b-3ae7357a999a" May 17 01:45:22.740187 containerd[1819]: 2025-05-17 01:45:22.724 [INFO][5186] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" iface="eth0" netns="/var/run/netns/cni-2c10cf63-8dd0-5253-fc0b-3ae7357a999a" May 17 01:45:22.740187 containerd[1819]: 2025-05-17 01:45:22.724 [INFO][5186] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:22.740187 containerd[1819]: 2025-05-17 01:45:22.724 [INFO][5186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:22.740187 containerd[1819]: 2025-05-17 01:45:22.734 [INFO][5224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" HandleID="k8s-pod-network.96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:22.740187 containerd[1819]: 2025-05-17 01:45:22.734 [INFO][5224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:22.740187 containerd[1819]: 2025-05-17 01:45:22.734 [INFO][5224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:22.740187 containerd[1819]: 2025-05-17 01:45:22.738 [WARNING][5224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" HandleID="k8s-pod-network.96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:22.740187 containerd[1819]: 2025-05-17 01:45:22.738 [INFO][5224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" HandleID="k8s-pod-network.96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:22.740187 containerd[1819]: 2025-05-17 01:45:22.739 [INFO][5224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:22.740187 containerd[1819]: 2025-05-17 01:45:22.739 [INFO][5186] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:22.740515 containerd[1819]: time="2025-05-17T01:45:22.740266539Z" level=info msg="TearDown network for sandbox \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\" successfully" May 17 01:45:22.740515 containerd[1819]: time="2025-05-17T01:45:22.740293593Z" level=info msg="StopPodSandbox for \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\" returns successfully" May 17 01:45:22.740763 containerd[1819]: time="2025-05-17T01:45:22.740721972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-574w6,Uid:22e0dd7b-458b-49cc-aec7-e6a2e03d9deb,Namespace:kube-system,Attempt:1,}" May 17 01:45:22.742041 systemd[1]: run-netns-cni\x2d2c10cf63\x2d8dd0\x2d5253\x2dfc0b\x2d3ae7357a999a.mount: Deactivated successfully. 
May 17 01:45:22.745477 containerd[1819]: 2025-05-17 01:45:22.723 [INFO][5187] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:22.745477 containerd[1819]: 2025-05-17 01:45:22.723 [INFO][5187] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" iface="eth0" netns="/var/run/netns/cni-873202db-d7ba-ea8b-7be9-e639a7213508" May 17 01:45:22.745477 containerd[1819]: 2025-05-17 01:45:22.723 [INFO][5187] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" iface="eth0" netns="/var/run/netns/cni-873202db-d7ba-ea8b-7be9-e639a7213508" May 17 01:45:22.745477 containerd[1819]: 2025-05-17 01:45:22.724 [INFO][5187] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" iface="eth0" netns="/var/run/netns/cni-873202db-d7ba-ea8b-7be9-e639a7213508" May 17 01:45:22.745477 containerd[1819]: 2025-05-17 01:45:22.724 [INFO][5187] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:22.745477 containerd[1819]: 2025-05-17 01:45:22.724 [INFO][5187] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:22.745477 containerd[1819]: 2025-05-17 01:45:22.734 [INFO][5222] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" HandleID="k8s-pod-network.2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:22.745477 containerd[1819]: 2025-05-17 01:45:22.734 [INFO][5222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:22.745477 containerd[1819]: 2025-05-17 01:45:22.739 [INFO][5222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:22.745477 containerd[1819]: 2025-05-17 01:45:22.742 [WARNING][5222] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" HandleID="k8s-pod-network.2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:22.745477 containerd[1819]: 2025-05-17 01:45:22.742 [INFO][5222] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" HandleID="k8s-pod-network.2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:22.745477 containerd[1819]: 2025-05-17 01:45:22.743 [INFO][5222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:22.745477 containerd[1819]: 2025-05-17 01:45:22.744 [INFO][5187] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:22.745837 containerd[1819]: time="2025-05-17T01:45:22.745559797Z" level=info msg="TearDown network for sandbox \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\" successfully" May 17 01:45:22.745837 containerd[1819]: time="2025-05-17T01:45:22.745577416Z" level=info msg="StopPodSandbox for \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\" returns successfully" May 17 01:45:22.745954 containerd[1819]: time="2025-05-17T01:45:22.745941385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf55ffd57-tw6s9,Uid:1fc7e728-2787-4a01-a1fd-dfaad847d529,Namespace:calico-apiserver,Attempt:1,}" May 17 01:45:22.752969 systemd[1]: run-netns-cni\x2d873202db\x2dd7ba\x2dea8b\x2d7be9\x2de639a7213508.mount: Deactivated successfully. May 17 01:45:22.793056 systemd-networkd[1605]: calic15b171384e: Link UP May 17 01:45:22.793190 systemd-networkd[1605]: calic15b171384e: Gained carrier May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.756 [INFO][5257] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.764 [INFO][5257] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0 coredns-668d6bf9bc- kube-system 22e0dd7b-458b-49cc-aec7-e6a2e03d9deb 890 0 2025-05-17 01:44:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-d569167b40 coredns-668d6bf9bc-574w6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic15b171384e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" Namespace="kube-system" Pod="coredns-668d6bf9bc-574w6" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-" May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.764 [INFO][5257] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" Namespace="kube-system" Pod="coredns-668d6bf9bc-574w6" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.775 [INFO][5303] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" HandleID="k8s-pod-network.959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.775 [INFO][5303] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" HandleID="k8s-pod-network.959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd840), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-d569167b40", "pod":"coredns-668d6bf9bc-574w6", "timestamp":"2025-05-17 01:45:22.775444568 +0000 UTC"}, Hostname:"ci-4081.3.3-n-d569167b40", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.775 [INFO][5303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.775 [INFO][5303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.775 [INFO][5303] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-d569167b40' May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.779 [INFO][5303] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.781 [INFO][5303] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.784 [INFO][5303] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.785 [INFO][5303] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.786 [INFO][5303] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.786 [INFO][5303] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.786 [INFO][5303] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303 May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.788 [INFO][5303] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.791 [INFO][5303] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.791 [INFO][5303] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.791 [INFO][5303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
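The claim sequence above (try affinity for 192.168.88.128/26, load the block, assign, write the block back, release the lock) hands coredns 192.168.88.130, the next address after whisker's .129; later entries continue the pattern with .131 and .132. A toy Go sketch of that next-free walk — the real allocator persists per-block state in the datastore and serializes every operation behind the host-wide IPAM lock the log keeps printing, so the in-memory map here is only for illustration:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree walks the block from its first host address and returns the
    // lowest address not yet assigned.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.88.129"): true, // whisker, assigned at 01:45:18
        }
        a, ok := nextFree(block, used)
        fmt.Println(a, ok) // 192.168.88.130 true, matching the coredns claim above
    }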
May 17 01:45:22.799996 containerd[1819]: 2025-05-17 01:45:22.791 [INFO][5303] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" HandleID="k8s-pod-network.959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:22.800499 containerd[1819]: 2025-05-17 01:45:22.792 [INFO][5257] cni-plugin/k8s.go 418: Populated endpoint ContainerID="959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" Namespace="kube-system" Pod="coredns-668d6bf9bc-574w6" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"22e0dd7b-458b-49cc-aec7-e6a2e03d9deb", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"", Pod:"coredns-668d6bf9bc-574w6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic15b171384e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:22.800499 containerd[1819]: 2025-05-17 01:45:22.792 [INFO][5257] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" Namespace="kube-system" Pod="coredns-668d6bf9bc-574w6" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:22.800499 containerd[1819]: 2025-05-17 01:45:22.792 [INFO][5257] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic15b171384e ContainerID="959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" Namespace="kube-system" Pod="coredns-668d6bf9bc-574w6" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:22.800499 containerd[1819]: 2025-05-17 01:45:22.793 [INFO][5257] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-574w6" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:22.800499 containerd[1819]: 2025-05-17 01:45:22.793 [INFO][5257] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" Namespace="kube-system" Pod="coredns-668d6bf9bc-574w6" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"22e0dd7b-458b-49cc-aec7-e6a2e03d9deb", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303", Pod:"coredns-668d6bf9bc-574w6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic15b171384e", MAC:"36:1d:92:19:ac:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:22.800499 containerd[1819]: 2025-05-17 01:45:22.798 [INFO][5257] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303" Namespace="kube-system" Pod="coredns-668d6bf9bc-574w6" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:22.808354 containerd[1819]: time="2025-05-17T01:45:22.808279355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:45:22.808354 containerd[1819]: time="2025-05-17T01:45:22.808308841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:45:22.808354 containerd[1819]: time="2025-05-17T01:45:22.808316030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:22.808505 containerd[1819]: time="2025-05-17T01:45:22.808356822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:22.827432 systemd[1]: Started cri-containerd-959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303.scope - libcontainer container 959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303. May 17 01:45:22.852114 containerd[1819]: time="2025-05-17T01:45:22.852089793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-574w6,Uid:22e0dd7b-458b-49cc-aec7-e6a2e03d9deb,Namespace:kube-system,Attempt:1,} returns sandbox id \"959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303\"" May 17 01:45:22.853362 containerd[1819]: time="2025-05-17T01:45:22.853319193Z" level=info msg="CreateContainer within sandbox \"959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 01:45:22.857714 containerd[1819]: time="2025-05-17T01:45:22.857698145Z" level=info msg="CreateContainer within sandbox \"959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95937ff38b66ee8c880382630c8418eeaa7cb57800da6e8cfb2e6d5d335e00e8\"" May 17 01:45:22.857900 containerd[1819]: time="2025-05-17T01:45:22.857886402Z" level=info msg="StartContainer for \"95937ff38b66ee8c880382630c8418eeaa7cb57800da6e8cfb2e6d5d335e00e8\"" May 17 01:45:22.884398 systemd[1]: Started cri-containerd-95937ff38b66ee8c880382630c8418eeaa7cb57800da6e8cfb2e6d5d335e00e8.scope - libcontainer container 95937ff38b66ee8c880382630c8418eeaa7cb57800da6e8cfb2e6d5d335e00e8. May 17 01:45:22.898830 systemd-networkd[1605]: calibeef9903bb8: Link UP May 17 01:45:22.899176 systemd-networkd[1605]: calibeef9903bb8: Gained carrier May 17 01:45:22.901000 containerd[1819]: time="2025-05-17T01:45:22.900969951Z" level=info msg="StartContainer for \"95937ff38b66ee8c880382630c8418eeaa7cb57800da6e8cfb2e6d5d335e00e8\" returns successfully" May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.760 [INFO][5266] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.766 [INFO][5266] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0 calico-apiserver-bf55ffd57- calico-apiserver 1fc7e728-2787-4a01-a1fd-dfaad847d529 889 0 2025-05-17 01:44:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bf55ffd57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-d569167b40 calico-apiserver-bf55ffd57-tw6s9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibeef9903bb8 [] [] }} ContainerID="0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-tw6s9" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-" May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.766 [INFO][5266] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-tw6s9" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:22.909719 
containerd[1819]: 2025-05-17 01:45:22.776 [INFO][5308] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" HandleID="k8s-pod-network.0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.776 [INFO][5308] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" HandleID="k8s-pod-network.0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-d569167b40", "pod":"calico-apiserver-bf55ffd57-tw6s9", "timestamp":"2025-05-17 01:45:22.776838612 +0000 UTC"}, Hostname:"ci-4081.3.3-n-d569167b40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.776 [INFO][5308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.791 [INFO][5308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.791 [INFO][5308] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-d569167b40' May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.880 [INFO][5308] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.883 [INFO][5308] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.886 [INFO][5308] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.887 [INFO][5308] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.888 [INFO][5308] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.888 [INFO][5308] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.889 [INFO][5308] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4 May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.892 [INFO][5308] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.896 [INFO][5308] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.896 [INFO][5308] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" host="ci-4081.3.3-n-d569167b40" May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.896 [INFO][5308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:22.909719 containerd[1819]: 2025-05-17 01:45:22.896 [INFO][5308] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" HandleID="k8s-pod-network.0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:22.910469 containerd[1819]: 2025-05-17 01:45:22.897 [INFO][5266] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-tw6s9" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0", GenerateName:"calico-apiserver-bf55ffd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"1fc7e728-2787-4a01-a1fd-dfaad847d529", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf55ffd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"", Pod:"calico-apiserver-bf55ffd57-tw6s9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibeef9903bb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:22.910469 containerd[1819]: 2025-05-17 01:45:22.897 [INFO][5266] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-tw6s9" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:22.910469 containerd[1819]: 2025-05-17 01:45:22.897 [INFO][5266] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibeef9903bb8 ContainerID="0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" Namespace="calico-apiserver" 
Pod="calico-apiserver-bf55ffd57-tw6s9" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:22.910469 containerd[1819]: 2025-05-17 01:45:22.899 [INFO][5266] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-tw6s9" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:22.910469 containerd[1819]: 2025-05-17 01:45:22.900 [INFO][5266] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-tw6s9" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0", GenerateName:"calico-apiserver-bf55ffd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"1fc7e728-2787-4a01-a1fd-dfaad847d529", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf55ffd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4", Pod:"calico-apiserver-bf55ffd57-tw6s9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibeef9903bb8", MAC:"4e:08:d1:bb:01:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:22.910469 containerd[1819]: 2025-05-17 01:45:22.908 [INFO][5266] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-tw6s9" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:22.934620 containerd[1819]: time="2025-05-17T01:45:22.934375835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:45:22.934620 containerd[1819]: time="2025-05-17T01:45:22.934584394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:45:22.934620 containerd[1819]: time="2025-05-17T01:45:22.934593305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:22.934733 containerd[1819]: time="2025-05-17T01:45:22.934635497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:22.961753 systemd[1]: Started cri-containerd-0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4.scope - libcontainer container 0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4. May 17 01:45:23.007976 containerd[1819]: time="2025-05-17T01:45:23.007948647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf55ffd57-tw6s9,Uid:1fc7e728-2787-4a01-a1fd-dfaad847d529,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4\"" May 17 01:45:23.008851 containerd[1819]: time="2025-05-17T01:45:23.008834210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 01:45:23.695792 containerd[1819]: time="2025-05-17T01:45:23.695669567Z" level=info msg="StopPodSandbox for \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\"" May 17 01:45:23.746749 containerd[1819]: 2025-05-17 01:45:23.729 [INFO][5537] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:23.746749 containerd[1819]: 2025-05-17 01:45:23.729 [INFO][5537] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" iface="eth0" netns="/var/run/netns/cni-198fbc59-473b-8660-a580-a09b4dfc040b" May 17 01:45:23.746749 containerd[1819]: 2025-05-17 01:45:23.729 [INFO][5537] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" iface="eth0" netns="/var/run/netns/cni-198fbc59-473b-8660-a580-a09b4dfc040b" May 17 01:45:23.746749 containerd[1819]: 2025-05-17 01:45:23.729 [INFO][5537] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" iface="eth0" netns="/var/run/netns/cni-198fbc59-473b-8660-a580-a09b4dfc040b" May 17 01:45:23.746749 containerd[1819]: 2025-05-17 01:45:23.729 [INFO][5537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:23.746749 containerd[1819]: 2025-05-17 01:45:23.729 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:23.746749 containerd[1819]: 2025-05-17 01:45:23.740 [INFO][5553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" HandleID="k8s-pod-network.8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" Workload="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:23.746749 containerd[1819]: 2025-05-17 01:45:23.740 [INFO][5553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:23.746749 containerd[1819]: 2025-05-17 01:45:23.740 [INFO][5553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:23.746749 containerd[1819]: 2025-05-17 01:45:23.744 [WARNING][5553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" HandleID="k8s-pod-network.8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" Workload="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:23.746749 containerd[1819]: 2025-05-17 01:45:23.744 [INFO][5553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" HandleID="k8s-pod-network.8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" Workload="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:23.746749 containerd[1819]: 2025-05-17 01:45:23.745 [INFO][5553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:23.746749 containerd[1819]: 2025-05-17 01:45:23.746 [INFO][5537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:23.747142 containerd[1819]: time="2025-05-17T01:45:23.746811376Z" level=info msg="TearDown network for sandbox \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\" successfully" May 17 01:45:23.747142 containerd[1819]: time="2025-05-17T01:45:23.746832628Z" level=info msg="StopPodSandbox for \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\" returns successfully" May 17 01:45:23.747258 containerd[1819]: time="2025-05-17T01:45:23.747248182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4dvh,Uid:64a6b624-b75c-46f6-8f62-c89636ac29be,Namespace:calico-system,Attempt:1,}" May 17 01:45:23.748371 systemd[1]: run-netns-cni\x2d198fbc59\x2d473b\x2d8660\x2da580\x2da09b4dfc040b.mount: Deactivated successfully. May 17 01:45:23.799502 systemd-networkd[1605]: calidf10bc8ef8a: Link UP May 17 01:45:23.799713 systemd-networkd[1605]: calidf10bc8ef8a: Gained carrier May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.760 [INFO][5572] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.766 [INFO][5572] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0 csi-node-driver- calico-system 64a6b624-b75c-46f6-8f62-c89636ac29be 904 0 2025-05-17 01:45:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-n-d569167b40 csi-node-driver-k4dvh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidf10bc8ef8a [] [] }} ContainerID="a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" Namespace="calico-system" Pod="csi-node-driver-k4dvh" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-" May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.766 [INFO][5572] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" Namespace="calico-system" Pod="csi-node-driver-k4dvh" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.778 [INFO][5595] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" HandleID="k8s-pod-network.a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" Workload="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.778 [INFO][5595] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" HandleID="k8s-pod-network.a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" Workload="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011b510), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-d569167b40", "pod":"csi-node-driver-k4dvh", "timestamp":"2025-05-17 01:45:23.778351118 +0000 UTC"}, Hostname:"ci-4081.3.3-n-d569167b40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.778 [INFO][5595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.778 [INFO][5595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.778 [INFO][5595] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-d569167b40' May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.782 [INFO][5595] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" host="ci-4081.3.3-n-d569167b40" May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.785 [INFO][5595] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-d569167b40" May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.787 [INFO][5595] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.789 [INFO][5595] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.790 [INFO][5595] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.790 [INFO][5595] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" host="ci-4081.3.3-n-d569167b40" May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.791 [INFO][5595] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4 May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.794 [INFO][5595] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" host="ci-4081.3.3-n-d569167b40" May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.797 [INFO][5595] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" 
host="ci-4081.3.3-n-d569167b40" May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.797 [INFO][5595] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" host="ci-4081.3.3-n-d569167b40" May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.797 [INFO][5595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:23.818768 containerd[1819]: 2025-05-17 01:45:23.797 [INFO][5595] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" HandleID="k8s-pod-network.a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" Workload="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:23.819425 containerd[1819]: 2025-05-17 01:45:23.798 [INFO][5572] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" Namespace="calico-system" Pod="csi-node-driver-k4dvh" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"64a6b624-b75c-46f6-8f62-c89636ac29be", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"", Pod:"csi-node-driver-k4dvh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidf10bc8ef8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:23.819425 containerd[1819]: 2025-05-17 01:45:23.798 [INFO][5572] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" Namespace="calico-system" Pod="csi-node-driver-k4dvh" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:23.819425 containerd[1819]: 2025-05-17 01:45:23.798 [INFO][5572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf10bc8ef8a ContainerID="a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" Namespace="calico-system" Pod="csi-node-driver-k4dvh" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:23.819425 containerd[1819]: 2025-05-17 01:45:23.799 [INFO][5572] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" Namespace="calico-system" Pod="csi-node-driver-k4dvh" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:23.819425 containerd[1819]: 2025-05-17 01:45:23.799 [INFO][5572] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" Namespace="calico-system" Pod="csi-node-driver-k4dvh" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"64a6b624-b75c-46f6-8f62-c89636ac29be", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4", Pod:"csi-node-driver-k4dvh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidf10bc8ef8a", MAC:"ca:a6:cb:33:dc:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:23.819425 containerd[1819]: 2025-05-17 01:45:23.816 [INFO][5572] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4" Namespace="calico-system" Pod="csi-node-driver-k4dvh" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:23.822326 kubelet[3068]: I0517 01:45:23.822263 3068 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-574w6" podStartSLOduration=32.822245791 podStartE2EDuration="32.822245791s" podCreationTimestamp="2025-05-17 01:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:45:23.822178629 +0000 UTC m=+39.178632306" watchObservedRunningTime="2025-05-17 01:45:23.822245791 +0000 UTC m=+39.178699469" May 17 01:45:23.831357 containerd[1819]: time="2025-05-17T01:45:23.831311750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:45:23.831357 containerd[1819]: time="2025-05-17T01:45:23.831345257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:45:23.831357 containerd[1819]: time="2025-05-17T01:45:23.831352332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:23.831477 containerd[1819]: time="2025-05-17T01:45:23.831407686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:23.851430 systemd[1]: Started cri-containerd-a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4.scope - libcontainer container a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4. May 17 01:45:23.861400 containerd[1819]: time="2025-05-17T01:45:23.861349651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4dvh,Uid:64a6b624-b75c-46f6-8f62-c89636ac29be,Namespace:calico-system,Attempt:1,} returns sandbox id \"a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4\"" May 17 01:45:24.255565 systemd-networkd[1605]: calibeef9903bb8: Gained IPv6LL May 17 01:45:24.255715 systemd-networkd[1605]: calic15b171384e: Gained IPv6LL May 17 01:45:24.695747 containerd[1819]: time="2025-05-17T01:45:24.695723441Z" level=info msg="StopPodSandbox for \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\"" May 17 01:45:24.695895 containerd[1819]: time="2025-05-17T01:45:24.695880882Z" level=info msg="StopPodSandbox for \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\"" May 17 01:45:24.735863 containerd[1819]: 2025-05-17 01:45:24.717 [INFO][5732] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:24.735863 containerd[1819]: 2025-05-17 01:45:24.717 [INFO][5732] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" iface="eth0" netns="/var/run/netns/cni-d4a45576-3955-9b0c-91a1-0d99177b5286" May 17 01:45:24.735863 containerd[1819]: 2025-05-17 01:45:24.718 [INFO][5732] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" iface="eth0" netns="/var/run/netns/cni-d4a45576-3955-9b0c-91a1-0d99177b5286" May 17 01:45:24.735863 containerd[1819]: 2025-05-17 01:45:24.718 [INFO][5732] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" iface="eth0" netns="/var/run/netns/cni-d4a45576-3955-9b0c-91a1-0d99177b5286" May 17 01:45:24.735863 containerd[1819]: 2025-05-17 01:45:24.718 [INFO][5732] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:24.735863 containerd[1819]: 2025-05-17 01:45:24.718 [INFO][5732] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:24.735863 containerd[1819]: 2025-05-17 01:45:24.729 [INFO][5767] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" HandleID="k8s-pod-network.51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:24.735863 containerd[1819]: 2025-05-17 01:45:24.729 [INFO][5767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:24.735863 containerd[1819]: 2025-05-17 01:45:24.729 [INFO][5767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:24.735863 containerd[1819]: 2025-05-17 01:45:24.732 [WARNING][5767] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" HandleID="k8s-pod-network.51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:24.735863 containerd[1819]: 2025-05-17 01:45:24.733 [INFO][5767] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" HandleID="k8s-pod-network.51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:24.735863 containerd[1819]: 2025-05-17 01:45:24.734 [INFO][5767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:24.735863 containerd[1819]: 2025-05-17 01:45:24.735 [INFO][5732] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:24.736166 containerd[1819]: time="2025-05-17T01:45:24.735957120Z" level=info msg="TearDown network for sandbox \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\" successfully" May 17 01:45:24.736166 containerd[1819]: time="2025-05-17T01:45:24.735981666Z" level=info msg="StopPodSandbox for \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\" returns successfully" May 17 01:45:24.736375 containerd[1819]: time="2025-05-17T01:45:24.736360932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b6b5cd9b-tjktk,Uid:b0cc1622-d561-4a8d-9dde-c3b01983d270,Namespace:calico-system,Attempt:1,}" May 17 01:45:24.740910 containerd[1819]: 2025-05-17 01:45:24.718 [INFO][5733] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:24.740910 containerd[1819]: 2025-05-17 01:45:24.718 [INFO][5733] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" iface="eth0" netns="/var/run/netns/cni-150476c8-d56c-9c04-89db-3a6f060dc495" May 17 01:45:24.740910 containerd[1819]: 2025-05-17 01:45:24.718 [INFO][5733] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" iface="eth0" netns="/var/run/netns/cni-150476c8-d56c-9c04-89db-3a6f060dc495" May 17 01:45:24.740910 containerd[1819]: 2025-05-17 01:45:24.718 [INFO][5733] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" iface="eth0" netns="/var/run/netns/cni-150476c8-d56c-9c04-89db-3a6f060dc495" May 17 01:45:24.740910 containerd[1819]: 2025-05-17 01:45:24.718 [INFO][5733] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:24.740910 containerd[1819]: 2025-05-17 01:45:24.718 [INFO][5733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:24.740910 containerd[1819]: 2025-05-17 01:45:24.729 [INFO][5769] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" HandleID="k8s-pod-network.3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:24.740910 containerd[1819]: 2025-05-17 01:45:24.729 [INFO][5769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:24.740910 containerd[1819]: 2025-05-17 01:45:24.734 [INFO][5769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:24.740910 containerd[1819]: 2025-05-17 01:45:24.738 [WARNING][5769] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" HandleID="k8s-pod-network.3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:24.740910 containerd[1819]: 2025-05-17 01:45:24.738 [INFO][5769] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" HandleID="k8s-pod-network.3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:24.740910 containerd[1819]: 2025-05-17 01:45:24.739 [INFO][5769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:24.740910 containerd[1819]: 2025-05-17 01:45:24.739 [INFO][5733] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:24.741191 containerd[1819]: time="2025-05-17T01:45:24.741007476Z" level=info msg="TearDown network for sandbox \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\" successfully" May 17 01:45:24.741191 containerd[1819]: time="2025-05-17T01:45:24.741020895Z" level=info msg="StopPodSandbox for \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\" returns successfully" May 17 01:45:24.741352 containerd[1819]: time="2025-05-17T01:45:24.741337257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8kz8,Uid:3af411a3-09ba-4381-bce5-19753bf3d671,Namespace:kube-system,Attempt:1,}" May 17 01:45:24.743041 systemd[1]: run-netns-cni\x2dd4a45576\x2d3955\x2d9b0c\x2d91a1\x2d0d99177b5286.mount: Deactivated successfully. May 17 01:45:24.745101 systemd[1]: run-netns-cni\x2d150476c8\x2dd56c\x2d9c04\x2d89db\x2d3a6f060dc495.mount: Deactivated successfully. May 17 01:45:24.797196 systemd-networkd[1605]: calif640e379973: Link UP May 17 01:45:24.797393 systemd-networkd[1605]: calif640e379973: Gained carrier May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.752 [INFO][5797] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.760 [INFO][5797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0 calico-kube-controllers-86b6b5cd9b- calico-system b0cc1622-d561-4a8d-9dde-c3b01983d270 921 0 2025-05-17 01:45:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86b6b5cd9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-n-d569167b40 calico-kube-controllers-86b6b5cd9b-tjktk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif640e379973 [] [] }} ContainerID="8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" Namespace="calico-system" Pod="calico-kube-controllers-86b6b5cd9b-tjktk" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-" May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.760 [INFO][5797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" Namespace="calico-system" Pod="calico-kube-controllers-86b6b5cd9b-tjktk" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.773 [INFO][5842] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" HandleID="k8s-pod-network.8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.773 [INFO][5842] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" HandleID="k8s-pod-network.8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" 
Workload="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-d569167b40", "pod":"calico-kube-controllers-86b6b5cd9b-tjktk", "timestamp":"2025-05-17 01:45:24.773111999 +0000 UTC"}, Hostname:"ci-4081.3.3-n-d569167b40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.773 [INFO][5842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.773 [INFO][5842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.773 [INFO][5842] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-d569167b40' May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.777 [INFO][5842] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.781 [INFO][5842] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.784 [INFO][5842] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.786 [INFO][5842] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.787 [INFO][5842] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.787 [INFO][5842] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.789 [INFO][5842] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625 May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.791 [INFO][5842] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.795 [INFO][5842] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.795 [INFO][5842] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.795 [INFO][5842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 01:45:24.804112 containerd[1819]: 2025-05-17 01:45:24.795 [INFO][5842] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" HandleID="k8s-pod-network.8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:24.804833 containerd[1819]: 2025-05-17 01:45:24.796 [INFO][5797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" Namespace="calico-system" Pod="calico-kube-controllers-86b6b5cd9b-tjktk" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0", GenerateName:"calico-kube-controllers-86b6b5cd9b-", Namespace:"calico-system", SelfLink:"", UID:"b0cc1622-d561-4a8d-9dde-c3b01983d270", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86b6b5cd9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"", Pod:"calico-kube-controllers-86b6b5cd9b-tjktk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif640e379973", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:24.804833 containerd[1819]: 2025-05-17 01:45:24.796 [INFO][5797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" Namespace="calico-system" Pod="calico-kube-controllers-86b6b5cd9b-tjktk" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:24.804833 containerd[1819]: 2025-05-17 01:45:24.796 [INFO][5797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif640e379973 ContainerID="8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" Namespace="calico-system" Pod="calico-kube-controllers-86b6b5cd9b-tjktk" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:24.804833 containerd[1819]: 2025-05-17 01:45:24.797 [INFO][5797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" Namespace="calico-system" Pod="calico-kube-controllers-86b6b5cd9b-tjktk" 
WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:24.804833 containerd[1819]: 2025-05-17 01:45:24.797 [INFO][5797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" Namespace="calico-system" Pod="calico-kube-controllers-86b6b5cd9b-tjktk" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0", GenerateName:"calico-kube-controllers-86b6b5cd9b-", Namespace:"calico-system", SelfLink:"", UID:"b0cc1622-d561-4a8d-9dde-c3b01983d270", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86b6b5cd9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625", Pod:"calico-kube-controllers-86b6b5cd9b-tjktk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif640e379973", MAC:"b6:ee:be:71:62:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:24.804833 containerd[1819]: 2025-05-17 01:45:24.802 [INFO][5797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625" Namespace="calico-system" Pod="calico-kube-controllers-86b6b5cd9b-tjktk" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:24.812510 containerd[1819]: time="2025-05-17T01:45:24.812439276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:45:24.812510 containerd[1819]: time="2025-05-17T01:45:24.812467769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:45:24.812510 containerd[1819]: time="2025-05-17T01:45:24.812474975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:24.812617 containerd[1819]: time="2025-05-17T01:45:24.812514486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:24.834531 systemd[1]: Started cri-containerd-8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625.scope - libcontainer container 8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625. May 17 01:45:24.863678 containerd[1819]: time="2025-05-17T01:45:24.863623436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86b6b5cd9b-tjktk,Uid:b0cc1622-d561-4a8d-9dde-c3b01983d270,Namespace:calico-system,Attempt:1,} returns sandbox id \"8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625\"" May 17 01:45:24.900093 systemd-networkd[1605]: calif115d9071ec: Link UP May 17 01:45:24.900417 systemd-networkd[1605]: calif115d9071ec: Gained carrier May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.756 [INFO][5811] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.762 [INFO][5811] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0 coredns-668d6bf9bc- kube-system 3af411a3-09ba-4381-bce5-19753bf3d671 922 0 2025-05-17 01:44:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-n-d569167b40 coredns-668d6bf9bc-p8kz8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif115d9071ec [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8kz8" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-" May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.762 [INFO][5811] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8kz8" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.775 [INFO][5847] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" HandleID="k8s-pod-network.cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.775 [INFO][5847] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" HandleID="k8s-pod-network.cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000789820), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-n-d569167b40", "pod":"coredns-668d6bf9bc-p8kz8", "timestamp":"2025-05-17 01:45:24.775415796 +0000 UTC"}, Hostname:"ci-4081.3.3-n-d569167b40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.775 [INFO][5847] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.795 [INFO][5847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.795 [INFO][5847] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-d569167b40' May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.878 [INFO][5847] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.882 [INFO][5847] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.885 [INFO][5847] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.887 [INFO][5847] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.889 [INFO][5847] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.889 [INFO][5847] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.890 [INFO][5847] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2 May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.893 [INFO][5847] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.897 [INFO][5847] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.897 [INFO][5847] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" host="ci-4081.3.3-n-d569167b40" May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.897 [INFO][5847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
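One reading note for the WorkloadEndpoint dumps that follow: the %+v struct dump prints port numbers as Go hex literals, while the earlier "found existing endpoint" line prints them in decimal ("[{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }]"). They are the same ports, as this trivial check shows:

```go
package main

import "fmt"

func main() {
	// Hex literals from the WorkloadEndpoint dump, decoded:
	fmt.Println(0x35)   // 53   -> "dns" (UDP) and "dns-tcp" (TCP)
	fmt.Println(0x23c1) // 9153 -> "metrics" (TCP), the CoreDNS Prometheus port
}
```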
May 17 01:45:24.909190 containerd[1819]: 2025-05-17 01:45:24.897 [INFO][5847] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" HandleID="k8s-pod-network.cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:24.909946 containerd[1819]: 2025-05-17 01:45:24.898 [INFO][5811] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8kz8" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3af411a3-09ba-4381-bce5-19753bf3d671", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"", Pod:"coredns-668d6bf9bc-p8kz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif115d9071ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:24.909946 containerd[1819]: 2025-05-17 01:45:24.898 [INFO][5811] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8kz8" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:24.909946 containerd[1819]: 2025-05-17 01:45:24.898 [INFO][5811] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif115d9071ec ContainerID="cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8kz8" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:24.909946 containerd[1819]: 2025-05-17 01:45:24.900 [INFO][5811] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-p8kz8" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:24.909946 containerd[1819]: 2025-05-17 01:45:24.900 [INFO][5811] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8kz8" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3af411a3-09ba-4381-bce5-19753bf3d671", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2", Pod:"coredns-668d6bf9bc-p8kz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif115d9071ec", MAC:"9a:32:5b:fe:7b:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:24.909946 containerd[1819]: 2025-05-17 01:45:24.907 [INFO][5811] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8kz8" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:24.918649 containerd[1819]: time="2025-05-17T01:45:24.918580273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:45:24.918649 containerd[1819]: time="2025-05-17T01:45:24.918610047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:45:24.918649 containerd[1819]: time="2025-05-17T01:45:24.918617535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:24.918751 containerd[1819]: time="2025-05-17T01:45:24.918657423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:24.939475 systemd[1]: Started cri-containerd-cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2.scope - libcontainer container cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2. May 17 01:45:24.965251 containerd[1819]: time="2025-05-17T01:45:24.965228228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8kz8,Uid:3af411a3-09ba-4381-bce5-19753bf3d671,Namespace:kube-system,Attempt:1,} returns sandbox id \"cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2\"" May 17 01:45:24.966630 containerd[1819]: time="2025-05-17T01:45:24.966577226Z" level=info msg="CreateContainer within sandbox \"cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 01:45:24.971496 containerd[1819]: time="2025-05-17T01:45:24.971461157Z" level=info msg="CreateContainer within sandbox \"cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46f1725cef323efc96559a1ce429254c6040a8e0b5bfa42db50aebc024108771\"" May 17 01:45:24.971659 containerd[1819]: time="2025-05-17T01:45:24.971614328Z" level=info msg="StartContainer for \"46f1725cef323efc96559a1ce429254c6040a8e0b5bfa42db50aebc024108771\"" May 17 01:45:25.001429 systemd[1]: Started cri-containerd-46f1725cef323efc96559a1ce429254c6040a8e0b5bfa42db50aebc024108771.scope - libcontainer container 46f1725cef323efc96559a1ce429254c6040a8e0b5bfa42db50aebc024108771. May 17 01:45:25.012402 containerd[1819]: time="2025-05-17T01:45:25.012382199Z" level=info msg="StartContainer for \"46f1725cef323efc96559a1ce429254c6040a8e0b5bfa42db50aebc024108771\" returns successfully" May 17 01:45:25.343565 systemd-networkd[1605]: calidf10bc8ef8a: Gained IPv6LL May 17 01:45:25.696070 containerd[1819]: time="2025-05-17T01:45:25.695974008Z" level=info msg="StopPodSandbox for \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\"" May 17 01:45:25.696070 containerd[1819]: time="2025-05-17T01:45:25.695986677Z" level=info msg="StopPodSandbox for \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\"" May 17 01:45:25.734180 containerd[1819]: 2025-05-17 01:45:25.717 [INFO][6098] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:25.734180 containerd[1819]: 2025-05-17 01:45:25.718 [INFO][6098] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" iface="eth0" netns="/var/run/netns/cni-06b689f7-09b5-5a17-8ee4-e735a62af608" May 17 01:45:25.734180 containerd[1819]: 2025-05-17 01:45:25.718 [INFO][6098] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" iface="eth0" netns="/var/run/netns/cni-06b689f7-09b5-5a17-8ee4-e735a62af608" May 17 01:45:25.734180 containerd[1819]: 2025-05-17 01:45:25.718 [INFO][6098] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" iface="eth0" netns="/var/run/netns/cni-06b689f7-09b5-5a17-8ee4-e735a62af608" May 17 01:45:25.734180 containerd[1819]: 2025-05-17 01:45:25.718 [INFO][6098] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:25.734180 containerd[1819]: 2025-05-17 01:45:25.718 [INFO][6098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:25.734180 containerd[1819]: 2025-05-17 01:45:25.727 [INFO][6130] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" HandleID="k8s-pod-network.df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:25.734180 containerd[1819]: 2025-05-17 01:45:25.727 [INFO][6130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:25.734180 containerd[1819]: 2025-05-17 01:45:25.727 [INFO][6130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:25.734180 containerd[1819]: 2025-05-17 01:45:25.731 [WARNING][6130] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" HandleID="k8s-pod-network.df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:25.734180 containerd[1819]: 2025-05-17 01:45:25.731 [INFO][6130] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" HandleID="k8s-pod-network.df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:25.734180 containerd[1819]: 2025-05-17 01:45:25.732 [INFO][6130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:25.734180 containerd[1819]: 2025-05-17 01:45:25.733 [INFO][6098] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:25.734493 containerd[1819]: time="2025-05-17T01:45:25.734258064Z" level=info msg="TearDown network for sandbox \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\" successfully" May 17 01:45:25.734493 containerd[1819]: time="2025-05-17T01:45:25.734282117Z" level=info msg="StopPodSandbox for \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\" returns successfully" May 17 01:45:25.734689 containerd[1819]: time="2025-05-17T01:45:25.734676337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf55ffd57-6x6sv,Uid:a83cb314-94c7-48be-9000-43244ee2be0f,Namespace:calico-apiserver,Attempt:1,}" May 17 01:45:25.739688 containerd[1819]: 2025-05-17 01:45:25.717 [INFO][6097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:25.739688 containerd[1819]: 2025-05-17 01:45:25.717 [INFO][6097] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" iface="eth0" netns="/var/run/netns/cni-33a17638-7de7-7f19-0725-fc5af5f8c99e" May 17 01:45:25.739688 containerd[1819]: 2025-05-17 01:45:25.717 [INFO][6097] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" iface="eth0" netns="/var/run/netns/cni-33a17638-7de7-7f19-0725-fc5af5f8c99e" May 17 01:45:25.739688 containerd[1819]: 2025-05-17 01:45:25.717 [INFO][6097] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" iface="eth0" netns="/var/run/netns/cni-33a17638-7de7-7f19-0725-fc5af5f8c99e" May 17 01:45:25.739688 containerd[1819]: 2025-05-17 01:45:25.717 [INFO][6097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:25.739688 containerd[1819]: 2025-05-17 01:45:25.717 [INFO][6097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:25.739688 containerd[1819]: 2025-05-17 01:45:25.728 [INFO][6128] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" HandleID="k8s-pod-network.ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" Workload="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:25.739688 containerd[1819]: 2025-05-17 01:45:25.728 [INFO][6128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:25.739688 containerd[1819]: 2025-05-17 01:45:25.732 [INFO][6128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:25.739688 containerd[1819]: 2025-05-17 01:45:25.737 [WARNING][6128] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" HandleID="k8s-pod-network.ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" Workload="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:25.739688 containerd[1819]: 2025-05-17 01:45:25.737 [INFO][6128] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" HandleID="k8s-pod-network.ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" Workload="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:25.739688 containerd[1819]: 2025-05-17 01:45:25.738 [INFO][6128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:25.739688 containerd[1819]: 2025-05-17 01:45:25.738 [INFO][6097] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:25.740033 containerd[1819]: time="2025-05-17T01:45:25.739753014Z" level=info msg="TearDown network for sandbox \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\" successfully" May 17 01:45:25.740033 containerd[1819]: time="2025-05-17T01:45:25.739769810Z" level=info msg="StopPodSandbox for \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\" returns successfully" May 17 01:45:25.740200 containerd[1819]: time="2025-05-17T01:45:25.740188069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-zwf9c,Uid:4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8,Namespace:calico-system,Attempt:1,}" May 17 01:45:25.744144 systemd[1]: run-netns-cni\x2d33a17638\x2d7de7\x2d7f19\x2d0725\x2dfc5af5f8c99e.mount: Deactivated successfully. May 17 01:45:25.744218 systemd[1]: run-netns-cni\x2d06b689f7\x2d09b5\x2d5a17\x2d8ee4\x2de735a62af608.mount: Deactivated successfully. May 17 01:45:25.792943 systemd-networkd[1605]: calif93cbfd5167: Link UP May 17 01:45:25.793123 systemd-networkd[1605]: calif93cbfd5167: Gained carrier May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.747 [INFO][6162] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.754 [INFO][6162] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0 calico-apiserver-bf55ffd57- calico-apiserver a83cb314-94c7-48be-9000-43244ee2be0f 938 0 2025-05-17 01:44:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bf55ffd57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-n-d569167b40 calico-apiserver-bf55ffd57-6x6sv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif93cbfd5167 [] [] }} ContainerID="01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-6x6sv" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-" May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.754 [INFO][6162] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-6x6sv" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.767 [INFO][6207] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" HandleID="k8s-pod-network.01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.767 [INFO][6207] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" HandleID="k8s-pod-network.01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00004f660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-n-d569167b40", "pod":"calico-apiserver-bf55ffd57-6x6sv", "timestamp":"2025-05-17 01:45:25.767626833 +0000 UTC"}, Hostname:"ci-4081.3.3-n-d569167b40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.767 [INFO][6207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.767 [INFO][6207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.767 [INFO][6207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-d569167b40' May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.772 [INFO][6207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.774 [INFO][6207] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.776 [INFO][6207] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.777 [INFO][6207] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.778 [INFO][6207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.778 [INFO][6207] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.779 [INFO][6207] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.781 [INFO][6207] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.788 [INFO][6207] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.788 [INFO][6207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.788 [INFO][6207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
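The teardown records above (StopPodSandbox for df41d7... and ae414c...) show the release half of the same flow: the plugin first tries to release by handleID, falls back to the workloadID, and treats "Asked to release address but it doesn't exist" as a WARNING to ignore rather than a failure. A hedged Go sketch of that behavior, reusing the client setup from the earlier sketch (signatures again vary by libcalico-go release):

```go
// Sketch of release-on-teardown. A missing allocation is tolerated, mirroring
// the "Asked to release address but it doesn't exist. Ignoring" lines above.
package main

import (
	"context"
	"log"

	clientv3 "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
)

func main() {
	c, err := clientv3.NewFromEnv() // assumption: datastore env is configured
	if err != nil {
		log.Fatal(err)
	}

	// Same per-sandbox handle that AutoAssign was given; placeholder value.
	handle := "k8s-pod-network.<container-id>"

	// ReleaseByHandle also runs under the host-wide IPAM lock. If the handle
	// holds no allocations (the IP was already released), log and move on so
	// sandbox teardown still succeeds.
	if err := c.IPAM().ReleaseByHandle(context.Background(), handle); err != nil {
		log.Printf("ignoring release error (address may already be gone): %v", err)
	}
}
```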
May 17 01:45:25.799490 containerd[1819]: 2025-05-17 01:45:25.788 [INFO][6207] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" HandleID="k8s-pod-network.01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:25.799911 containerd[1819]: 2025-05-17 01:45:25.791 [INFO][6162] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-6x6sv" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0", GenerateName:"calico-apiserver-bf55ffd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"a83cb314-94c7-48be-9000-43244ee2be0f", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf55ffd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"", Pod:"calico-apiserver-bf55ffd57-6x6sv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif93cbfd5167", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:25.799911 containerd[1819]: 2025-05-17 01:45:25.792 [INFO][6162] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-6x6sv" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:25.799911 containerd[1819]: 2025-05-17 01:45:25.792 [INFO][6162] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif93cbfd5167 ContainerID="01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-6x6sv" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:25.799911 containerd[1819]: 2025-05-17 01:45:25.793 [INFO][6162] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-6x6sv" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:25.799911 containerd[1819]: 2025-05-17 01:45:25.793 [INFO][6162] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-6x6sv" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0", GenerateName:"calico-apiserver-bf55ffd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"a83cb314-94c7-48be-9000-43244ee2be0f", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf55ffd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae", Pod:"calico-apiserver-bf55ffd57-6x6sv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif93cbfd5167", MAC:"56:6b:54:ed:91:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:25.799911 containerd[1819]: 2025-05-17 01:45:25.798 [INFO][6162] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae" Namespace="calico-apiserver" Pod="calico-apiserver-bf55ffd57-6x6sv" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:25.807355 containerd[1819]: time="2025-05-17T01:45:25.807255451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:45:25.807556 containerd[1819]: time="2025-05-17T01:45:25.807487222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:45:25.807556 containerd[1819]: time="2025-05-17T01:45:25.807497760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:25.807556 containerd[1819]: time="2025-05-17T01:45:25.807542313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:25.824304 kubelet[3068]: I0517 01:45:25.824246 3068 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-p8kz8" podStartSLOduration=34.824231475 podStartE2EDuration="34.824231475s" podCreationTimestamp="2025-05-17 01:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:45:25.824118527 +0000 UTC m=+41.180572205" watchObservedRunningTime="2025-05-17 01:45:25.824231475 +0000 UTC m=+41.180685143" May 17 01:45:25.826501 systemd[1]: Started cri-containerd-01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae.scope - libcontainer container 01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae. May 17 01:45:25.849737 containerd[1819]: time="2025-05-17T01:45:25.849712729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf55ffd57-6x6sv,Uid:a83cb314-94c7-48be-9000-43244ee2be0f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae\"" May 17 01:45:25.888650 systemd-networkd[1605]: cali56b58355c98: Link UP May 17 01:45:25.888779 systemd-networkd[1605]: cali56b58355c98: Gained carrier May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.755 [INFO][6179] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.760 [INFO][6179] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0 goldmane-78d55f7ddc- calico-system 4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8 937 0 2025-05-17 01:44:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.3-n-d569167b40 goldmane-78d55f7ddc-zwf9c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali56b58355c98 [] [] }} ContainerID="80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zwf9c" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-" May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.760 [INFO][6179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zwf9c" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.773 [INFO][6219] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" HandleID="k8s-pod-network.80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" Workload="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.773 [INFO][6219] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" HandleID="k8s-pod-network.80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" Workload="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000255810), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-n-d569167b40", "pod":"goldmane-78d55f7ddc-zwf9c", "timestamp":"2025-05-17 01:45:25.773029823 +0000 UTC"}, Hostname:"ci-4081.3.3-n-d569167b40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.773 [INFO][6219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.789 [INFO][6219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.789 [INFO][6219] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-n-d569167b40' May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.873 [INFO][6219] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.876 [INFO][6219] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.878 [INFO][6219] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.879 [INFO][6219] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.881 [INFO][6219] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.881 [INFO][6219] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.881 [INFO][6219] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18 May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.883 [INFO][6219] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.886 [INFO][6219] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.886 [INFO][6219] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" host="ci-4081.3.3-n-d569167b40" May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.886 [INFO][6219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
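Aside, not part of the captured log: the timestamps above show the host-wide IPAM lock serializing the two concurrent CNI invocations — handler [6219] announces the lock at 25.773 but only acquires it at 25.789, immediately after [6207] releases it at 25.788. A toy Go sketch of that pattern (the pod names are taken from the log; everything else is illustrative, not Calico's actual implementation):

package main

import (
	"fmt"
	"sync"
)

func main() {
	var ipamLock sync.Mutex // stands in for the host-wide IPAM lock
	var wg sync.WaitGroup
	for _, pod := range []string{"calico-apiserver-bf55ffd57-6x6sv", "goldmane-78d55f7ddc-zwf9c"} {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			ipamLock.Lock() // "About to acquire host-wide IPAM lock."
			defer ipamLock.Unlock()
			fmt.Println("assigning address for", name) // one assignment proceeds at a time
		}(pod)
	}
	wg.Wait()
}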
May 17 01:45:25.894463 containerd[1819]: 2025-05-17 01:45:25.886 [INFO][6219] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" HandleID="k8s-pod-network.80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" Workload="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:25.895011 containerd[1819]: 2025-05-17 01:45:25.887 [INFO][6179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zwf9c" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"", Pod:"goldmane-78d55f7ddc-zwf9c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56b58355c98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:25.895011 containerd[1819]: 2025-05-17 01:45:25.887 [INFO][6179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zwf9c" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:25.895011 containerd[1819]: 2025-05-17 01:45:25.887 [INFO][6179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56b58355c98 ContainerID="80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zwf9c" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:25.895011 containerd[1819]: 2025-05-17 01:45:25.888 [INFO][6179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zwf9c" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:25.895011 containerd[1819]: 2025-05-17 01:45:25.889 [INFO][6179] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" 
Namespace="calico-system" Pod="goldmane-78d55f7ddc-zwf9c" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18", Pod:"goldmane-78d55f7ddc-zwf9c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56b58355c98", MAC:"e6:ea:e3:55:6d:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:25.895011 containerd[1819]: 2025-05-17 01:45:25.893 [INFO][6179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18" Namespace="calico-system" Pod="goldmane-78d55f7ddc-zwf9c" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:25.902465 containerd[1819]: time="2025-05-17T01:45:25.902364251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:45:25.902649 containerd[1819]: time="2025-05-17T01:45:25.902428667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:45:25.902689 containerd[1819]: time="2025-05-17T01:45:25.902651041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:25.902728 containerd[1819]: time="2025-05-17T01:45:25.902707772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:45:25.920398 systemd[1]: Started cri-containerd-80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18.scope - libcontainer container 80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18. 
May 17 01:45:25.942612 containerd[1819]: time="2025-05-17T01:45:25.942563076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-zwf9c,Uid:4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8,Namespace:calico-system,Attempt:1,} returns sandbox id \"80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18\"" May 17 01:45:25.945629 containerd[1819]: time="2025-05-17T01:45:25.945609053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:25.945811 containerd[1819]: time="2025-05-17T01:45:25.945791529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 01:45:25.946196 containerd[1819]: time="2025-05-17T01:45:25.946164431Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:25.947355 containerd[1819]: time="2025-05-17T01:45:25.947301365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:25.948189 containerd[1819]: time="2025-05-17T01:45:25.948148289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 2.939291249s" May 17 01:45:25.948189 containerd[1819]: time="2025-05-17T01:45:25.948162927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 01:45:25.948531 containerd[1819]: time="2025-05-17T01:45:25.948518016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 01:45:25.949057 containerd[1819]: time="2025-05-17T01:45:25.949043748Z" level=info msg="CreateContainer within sandbox \"0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 01:45:25.952814 containerd[1819]: time="2025-05-17T01:45:25.952773456Z" level=info msg="CreateContainer within sandbox \"0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ca4fdfc84af638106aced3777a866f5893253a1bae807ba64de4c7dd36bd1866\"" May 17 01:45:25.953055 containerd[1819]: time="2025-05-17T01:45:25.952994726Z" level=info msg="StartContainer for \"ca4fdfc84af638106aced3777a866f5893253a1bae807ba64de4c7dd36bd1866\"" May 17 01:45:25.982539 systemd[1]: Started cri-containerd-ca4fdfc84af638106aced3777a866f5893253a1bae807ba64de4c7dd36bd1866.scope - libcontainer container ca4fdfc84af638106aced3777a866f5893253a1bae807ba64de4c7dd36bd1866. 
May 17 01:45:26.011347 containerd[1819]: time="2025-05-17T01:45:26.011321124Z" level=info msg="StartContainer for \"ca4fdfc84af638106aced3777a866f5893253a1bae807ba64de4c7dd36bd1866\" returns successfully" May 17 01:45:26.175434 systemd-networkd[1605]: calif640e379973: Gained IPv6LL May 17 01:45:26.559550 systemd-networkd[1605]: calif115d9071ec: Gained IPv6LL May 17 01:45:26.852416 kubelet[3068]: I0517 01:45:26.852258 3068 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bf55ffd57-tw6s9" podStartSLOduration=25.912436499000002 podStartE2EDuration="28.852221246s" podCreationTimestamp="2025-05-17 01:44:58 +0000 UTC" firstStartedPulling="2025-05-17 01:45:23.008669862 +0000 UTC m=+38.365123536" lastFinishedPulling="2025-05-17 01:45:25.948454613 +0000 UTC m=+41.304908283" observedRunningTime="2025-05-17 01:45:26.85165967 +0000 UTC m=+42.208113431" watchObservedRunningTime="2025-05-17 01:45:26.852221246 +0000 UTC m=+42.208674971" May 17 01:45:27.199607 systemd-networkd[1605]: calif93cbfd5167: Gained IPv6LL May 17 01:45:27.586653 kubelet[3068]: I0517 01:45:27.586567 3068 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:45:27.647380 systemd-networkd[1605]: cali56b58355c98: Gained IPv6LL May 17 01:45:27.654128 containerd[1819]: time="2025-05-17T01:45:27.654079437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:27.654309 containerd[1819]: time="2025-05-17T01:45:27.654261760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 01:45:27.654707 containerd[1819]: time="2025-05-17T01:45:27.654665525Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:27.655679 containerd[1819]: time="2025-05-17T01:45:27.655664221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:27.656149 containerd[1819]: time="2025-05-17T01:45:27.656109731Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.707573912s" May 17 01:45:27.656149 containerd[1819]: time="2025-05-17T01:45:27.656123867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 01:45:27.656796 containerd[1819]: time="2025-05-17T01:45:27.656757077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 01:45:27.657307 containerd[1819]: time="2025-05-17T01:45:27.657265855Z" level=info msg="CreateContainer within sandbox \"a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 01:45:27.662138 containerd[1819]: time="2025-05-17T01:45:27.662096952Z" level=info msg="CreateContainer within sandbox \"a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5a2c7eb2a326f3c17841237b5a2ff75ca4879a1ae30ebb5716e6fe8e091bfc41\"" May 17 01:45:27.662336 containerd[1819]: time="2025-05-17T01:45:27.662302769Z" level=info msg="StartContainer for \"5a2c7eb2a326f3c17841237b5a2ff75ca4879a1ae30ebb5716e6fe8e091bfc41\"" May 17 01:45:27.693834 systemd[1]: Started cri-containerd-5a2c7eb2a326f3c17841237b5a2ff75ca4879a1ae30ebb5716e6fe8e091bfc41.scope - libcontainer container 5a2c7eb2a326f3c17841237b5a2ff75ca4879a1ae30ebb5716e6fe8e091bfc41. May 17 01:45:27.755582 containerd[1819]: time="2025-05-17T01:45:27.755513707Z" level=info msg="StartContainer for \"5a2c7eb2a326f3c17841237b5a2ff75ca4879a1ae30ebb5716e6fe8e091bfc41\" returns successfully" May 17 01:45:27.844468 kubelet[3068]: I0517 01:45:27.844236 3068 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:45:28.261347 kernel: bpftool[6575]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 01:45:28.426661 systemd-networkd[1605]: vxlan.calico: Link UP May 17 01:45:28.426667 systemd-networkd[1605]: vxlan.calico: Gained carrier May 17 01:45:30.015520 systemd-networkd[1605]: vxlan.calico: Gained IPv6LL May 17 01:45:30.659937 containerd[1819]: time="2025-05-17T01:45:30.659907438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:30.660152 containerd[1819]: time="2025-05-17T01:45:30.660086482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 01:45:30.660399 containerd[1819]: time="2025-05-17T01:45:30.660386090Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:30.661469 containerd[1819]: time="2025-05-17T01:45:30.661455294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:30.661867 containerd[1819]: time="2025-05-17T01:45:30.661853132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 3.005080611s" May 17 01:45:30.661906 containerd[1819]: time="2025-05-17T01:45:30.661869211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 01:45:30.662327 containerd[1819]: time="2025-05-17T01:45:30.662315842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 01:45:30.665247 containerd[1819]: time="2025-05-17T01:45:30.665226517Z" level=info msg="CreateContainer within sandbox \"8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 01:45:30.669321 containerd[1819]: time="2025-05-17T01:45:30.669307332Z" level=info msg="CreateContainer within sandbox \"8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6686b94a623640626472a4bbd3ae05827d117452ec9ded282696662012ea061a\"" May 17 01:45:30.669573 containerd[1819]: time="2025-05-17T01:45:30.669560621Z" level=info msg="StartContainer for \"6686b94a623640626472a4bbd3ae05827d117452ec9ded282696662012ea061a\"" May 17 01:45:30.699439 systemd[1]: Started cri-containerd-6686b94a623640626472a4bbd3ae05827d117452ec9ded282696662012ea061a.scope - libcontainer container 6686b94a623640626472a4bbd3ae05827d117452ec9ded282696662012ea061a. May 17 01:45:30.724725 containerd[1819]: time="2025-05-17T01:45:30.724699810Z" level=info msg="StartContainer for \"6686b94a623640626472a4bbd3ae05827d117452ec9ded282696662012ea061a\" returns successfully" May 17 01:45:30.965937 kubelet[3068]: I0517 01:45:30.965824 3068 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86b6b5cd9b-tjktk" podStartSLOduration=25.167987672 podStartE2EDuration="30.965810741s" podCreationTimestamp="2025-05-17 01:45:00 +0000 UTC" firstStartedPulling="2025-05-17 01:45:24.86442947 +0000 UTC m=+40.220883144" lastFinishedPulling="2025-05-17 01:45:30.662252545 +0000 UTC m=+46.018706213" observedRunningTime="2025-05-17 01:45:30.881403407 +0000 UTC m=+46.237857182" watchObservedRunningTime="2025-05-17 01:45:30.965810741 +0000 UTC m=+46.322264412" May 17 01:45:31.099002 containerd[1819]: time="2025-05-17T01:45:31.098975285Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:31.099213 containerd[1819]: time="2025-05-17T01:45:31.099192073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 01:45:31.100515 containerd[1819]: time="2025-05-17T01:45:31.100494542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 438.163511ms" May 17 01:45:31.100515 containerd[1819]: time="2025-05-17T01:45:31.100510484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 01:45:31.101009 containerd[1819]: time="2025-05-17T01:45:31.100971430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:45:31.101511 containerd[1819]: time="2025-05-17T01:45:31.101496423Z" level=info msg="CreateContainer within sandbox \"01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 01:45:31.105902 containerd[1819]: time="2025-05-17T01:45:31.105860015Z" level=info msg="CreateContainer within sandbox \"01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ab2f66c2ca2d767d60f760aace880af2c5c5116dbb53f4f5c2d980b237c9a90d\"" May 17 01:45:31.106109 containerd[1819]: time="2025-05-17T01:45:31.106097468Z" level=info msg="StartContainer for \"ab2f66c2ca2d767d60f760aace880af2c5c5116dbb53f4f5c2d980b237c9a90d\"" May 17 01:45:31.142521 systemd[1]: Started 
cri-containerd-ab2f66c2ca2d767d60f760aace880af2c5c5116dbb53f4f5c2d980b237c9a90d.scope - libcontainer container ab2f66c2ca2d767d60f760aace880af2c5c5116dbb53f4f5c2d980b237c9a90d. May 17 01:45:31.172917 containerd[1819]: time="2025-05-17T01:45:31.172887005Z" level=info msg="StartContainer for \"ab2f66c2ca2d767d60f760aace880af2c5c5116dbb53f4f5c2d980b237c9a90d\" returns successfully" May 17 01:45:31.412447 containerd[1819]: time="2025-05-17T01:45:31.412424778Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:45:31.412958 containerd[1819]: time="2025-05-17T01:45:31.412943209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:45:31.413015 containerd[1819]: time="2025-05-17T01:45:31.412998449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 01:45:31.413128 kubelet[3068]: E0517 01:45:31.413074 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:45:31.413218 kubelet[3068]: E0517 01:45:31.413137 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:45:31.413363 containerd[1819]: time="2025-05-17T01:45:31.413350146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 01:45:31.413396 kubelet[3068]: E0517 01:45:31.413337 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z97mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-zwf9c_calico-system(4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:45:31.414485 kubelet[3068]: E0517 01:45:31.414466 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:45:31.860650 kubelet[3068]: E0517 01:45:31.860629 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:45:31.865715 kubelet[3068]: I0517 01:45:31.865681 3068 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bf55ffd57-6x6sv" podStartSLOduration=28.61506687 podStartE2EDuration="33.865671257s" podCreationTimestamp="2025-05-17 01:44:58 +0000 UTC" firstStartedPulling="2025-05-17 01:45:25.850308432 +0000 UTC m=+41.206762103" lastFinishedPulling="2025-05-17 01:45:31.100912821 +0000 UTC m=+46.457366490" observedRunningTime="2025-05-17 01:45:31.865427571 +0000 UTC m=+47.221881242" watchObservedRunningTime="2025-05-17 01:45:31.865671257 +0000 UTC m=+47.222124925" May 17 01:45:32.864052 kubelet[3068]: I0517 01:45:32.863980 3068 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:45:33.291466 containerd[1819]: time="2025-05-17T01:45:33.291377707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:33.291669 containerd[1819]: time="2025-05-17T01:45:33.291583673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 01:45:33.291973 containerd[1819]: time="2025-05-17T01:45:33.291930922Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:33.292974 containerd[1819]: time="2025-05-17T01:45:33.292924211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 01:45:33.293710 containerd[1819]: time="2025-05-17T01:45:33.293668077Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.880300482s" May 17 01:45:33.293710 containerd[1819]: time="2025-05-17T01:45:33.293685222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 01:45:33.294172 containerd[1819]: time="2025-05-17T01:45:33.294133061Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:45:33.294695 containerd[1819]: time="2025-05-17T01:45:33.294650764Z" level=info msg="CreateContainer within sandbox \"a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 01:45:33.299944 containerd[1819]: time="2025-05-17T01:45:33.299900002Z" level=info msg="CreateContainer within sandbox \"a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"845a635a2386df7f29b92629a5abc7045ba1383320b735510dfd252383aa11ae\"" May 17 01:45:33.300133 containerd[1819]: time="2025-05-17T01:45:33.300087514Z" level=info msg="StartContainer for \"845a635a2386df7f29b92629a5abc7045ba1383320b735510dfd252383aa11ae\"" May 17 01:45:33.319397 systemd[1]: Started cri-containerd-845a635a2386df7f29b92629a5abc7045ba1383320b735510dfd252383aa11ae.scope - libcontainer container 845a635a2386df7f29b92629a5abc7045ba1383320b735510dfd252383aa11ae. May 17 01:45:33.331557 containerd[1819]: time="2025-05-17T01:45:33.331533354Z" level=info msg="StartContainer for \"845a635a2386df7f29b92629a5abc7045ba1383320b735510dfd252383aa11ae\" returns successfully" May 17 01:45:33.609787 containerd[1819]: time="2025-05-17T01:45:33.609721261Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:45:33.610148 containerd[1819]: time="2025-05-17T01:45:33.610106458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:45:33.610179 containerd[1819]: time="2025-05-17T01:45:33.610152634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 01:45:33.610257 kubelet[3068]: E0517 01:45:33.610232 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:45:33.610326 kubelet[3068]: E0517 01:45:33.610268 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:45:33.610365 kubelet[3068]: E0517 01:45:33.610344 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:92a8012b5750456bb9056172472acd21,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-md4bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f968694-hv974_calico-system(b211c981-6173-4ca8-aa53-cf31a5319b90): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:45:33.611783 containerd[1819]: time="2025-05-17T01:45:33.611768557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:45:33.737164 kubelet[3068]: I0517 01:45:33.737066 3068 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 01:45:33.737164 kubelet[3068]: I0517 01:45:33.737133 3068 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 01:45:33.896842 kubelet[3068]: I0517 01:45:33.896603 3068 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k4dvh" podStartSLOduration=24.464387255 podStartE2EDuration="33.896561836s" podCreationTimestamp="2025-05-17 01:45:00 +0000 UTC" firstStartedPulling="2025-05-17 01:45:23.861898569 +0000 UTC m=+39.218352237" lastFinishedPulling="2025-05-17 01:45:33.294073149 +0000 UTC m=+48.650526818" observedRunningTime="2025-05-17 01:45:33.895858205 +0000 UTC m=+49.252311966" watchObservedRunningTime="2025-05-17 01:45:33.896561836 +0000 UTC m=+49.253015563" May 17 01:45:33.924106 containerd[1819]: time="2025-05-17T01:45:33.924070863Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:45:33.924553 containerd[1819]: 
time="2025-05-17T01:45:33.924530090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:45:33.924688 containerd[1819]: time="2025-05-17T01:45:33.924614119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 01:45:33.924728 kubelet[3068]: E0517 01:45:33.924708 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:45:33.924776 kubelet[3068]: E0517 01:45:33.924735 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:45:33.924853 kubelet[3068]: E0517 01:45:33.924799 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-md4bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f968694-hv974_calico-system(b211c981-6173-4ca8-aa53-cf31a5319b90): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:45:33.926121 kubelet[3068]: E0517 01:45:33.926075 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:45:44.693203 containerd[1819]: time="2025-05-17T01:45:44.693041173Z" level=info msg="StopPodSandbox for 
\"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\"" May 17 01:45:44.742828 containerd[1819]: 2025-05-17 01:45:44.713 [WARNING][6972] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"22e0dd7b-458b-49cc-aec7-e6a2e03d9deb", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303", Pod:"coredns-668d6bf9bc-574w6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic15b171384e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:44.742828 containerd[1819]: 2025-05-17 01:45:44.713 [INFO][6972] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:44.742828 containerd[1819]: 2025-05-17 01:45:44.713 [INFO][6972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" iface="eth0" netns="" May 17 01:45:44.742828 containerd[1819]: 2025-05-17 01:45:44.713 [INFO][6972] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:44.742828 containerd[1819]: 2025-05-17 01:45:44.713 [INFO][6972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:44.742828 containerd[1819]: 2025-05-17 01:45:44.726 [INFO][6990] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" HandleID="k8s-pod-network.96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:44.742828 containerd[1819]: 2025-05-17 01:45:44.726 [INFO][6990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:44.742828 containerd[1819]: 2025-05-17 01:45:44.726 [INFO][6990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:44.742828 containerd[1819]: 2025-05-17 01:45:44.732 [WARNING][6990] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" HandleID="k8s-pod-network.96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:44.742828 containerd[1819]: 2025-05-17 01:45:44.732 [INFO][6990] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" HandleID="k8s-pod-network.96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:44.742828 containerd[1819]: 2025-05-17 01:45:44.736 [INFO][6990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:44.742828 containerd[1819]: 2025-05-17 01:45:44.739 [INFO][6972] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:44.744641 containerd[1819]: time="2025-05-17T01:45:44.742899906Z" level=info msg="TearDown network for sandbox \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\" successfully" May 17 01:45:44.744641 containerd[1819]: time="2025-05-17T01:45:44.742961325Z" level=info msg="StopPodSandbox for \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\" returns successfully" May 17 01:45:44.744641 containerd[1819]: time="2025-05-17T01:45:44.744027356Z" level=info msg="RemovePodSandbox for \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\"" May 17 01:45:44.744641 containerd[1819]: time="2025-05-17T01:45:44.744115256Z" level=info msg="Forcibly stopping sandbox \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\"" May 17 01:45:44.879355 containerd[1819]: 2025-05-17 01:45:44.823 [WARNING][7017] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"22e0dd7b-458b-49cc-aec7-e6a2e03d9deb", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"959e356d0636c1e7fb8de7b79d808e0d91c0b5fd9503e6b6ebe8ffdbf71b4303", Pod:"coredns-668d6bf9bc-574w6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic15b171384e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:44.879355 containerd[1819]: 2025-05-17 01:45:44.823 [INFO][7017] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:44.879355 containerd[1819]: 2025-05-17 01:45:44.823 [INFO][7017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" iface="eth0" netns="" May 17 01:45:44.879355 containerd[1819]: 2025-05-17 01:45:44.823 [INFO][7017] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:44.879355 containerd[1819]: 2025-05-17 01:45:44.823 [INFO][7017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:44.879355 containerd[1819]: 2025-05-17 01:45:44.859 [INFO][7035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" HandleID="k8s-pod-network.96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:44.879355 containerd[1819]: 2025-05-17 01:45:44.859 [INFO][7035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:44.879355 containerd[1819]: 2025-05-17 01:45:44.859 [INFO][7035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 01:45:44.879355 containerd[1819]: 2025-05-17 01:45:44.871 [WARNING][7035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" HandleID="k8s-pod-network.96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:44.879355 containerd[1819]: 2025-05-17 01:45:44.871 [INFO][7035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" HandleID="k8s-pod-network.96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--574w6-eth0" May 17 01:45:44.879355 containerd[1819]: 2025-05-17 01:45:44.874 [INFO][7035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:44.879355 containerd[1819]: 2025-05-17 01:45:44.877 [INFO][7017] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a" May 17 01:45:44.880428 containerd[1819]: time="2025-05-17T01:45:44.879418602Z" level=info msg="TearDown network for sandbox \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\" successfully" May 17 01:45:44.882598 containerd[1819]: time="2025-05-17T01:45:44.882552340Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 01:45:44.882598 containerd[1819]: time="2025-05-17T01:45:44.882582365Z" level=info msg="RemovePodSandbox \"96a3480d46944e12b2a79c9bc1ee2b5c8db4f3cfd4e957f319ccf7703daeab1a\" returns successfully" May 17 01:45:44.882977 containerd[1819]: time="2025-05-17T01:45:44.882930546Z" level=info msg="StopPodSandbox for \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\"" May 17 01:45:44.916236 containerd[1819]: 2025-05-17 01:45:44.899 [WARNING][7061] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"64a6b624-b75c-46f6-8f62-c89636ac29be", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4", Pod:"csi-node-driver-k4dvh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidf10bc8ef8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:44.916236 containerd[1819]: 2025-05-17 01:45:44.899 [INFO][7061] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:44.916236 containerd[1819]: 2025-05-17 01:45:44.899 [INFO][7061] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" iface="eth0" netns="" May 17 01:45:44.916236 containerd[1819]: 2025-05-17 01:45:44.899 [INFO][7061] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:44.916236 containerd[1819]: 2025-05-17 01:45:44.899 [INFO][7061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:44.916236 containerd[1819]: 2025-05-17 01:45:44.909 [INFO][7077] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" HandleID="k8s-pod-network.8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" Workload="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:44.916236 containerd[1819]: 2025-05-17 01:45:44.909 [INFO][7077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:44.916236 containerd[1819]: 2025-05-17 01:45:44.909 [INFO][7077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:44.916236 containerd[1819]: 2025-05-17 01:45:44.913 [WARNING][7077] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" HandleID="k8s-pod-network.8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" Workload="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:44.916236 containerd[1819]: 2025-05-17 01:45:44.913 [INFO][7077] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" HandleID="k8s-pod-network.8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" Workload="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:44.916236 containerd[1819]: 2025-05-17 01:45:44.914 [INFO][7077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:44.916236 containerd[1819]: 2025-05-17 01:45:44.915 [INFO][7061] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:44.916578 containerd[1819]: time="2025-05-17T01:45:44.916254414Z" level=info msg="TearDown network for sandbox \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\" successfully" May 17 01:45:44.916578 containerd[1819]: time="2025-05-17T01:45:44.916269214Z" level=info msg="StopPodSandbox for \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\" returns successfully" May 17 01:45:44.916578 containerd[1819]: time="2025-05-17T01:45:44.916459435Z" level=info msg="RemovePodSandbox for \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\"" May 17 01:45:44.916578 containerd[1819]: time="2025-05-17T01:45:44.916472840Z" level=info msg="Forcibly stopping sandbox \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\"" May 17 01:45:44.950560 containerd[1819]: 2025-05-17 01:45:44.933 [WARNING][7103] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"64a6b624-b75c-46f6-8f62-c89636ac29be", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"a3d834787cabe3721f03faf3384dd614b1d9d662a79c93a6138b5993d6a4e9b4", Pod:"csi-node-driver-k4dvh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidf10bc8ef8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:44.950560 containerd[1819]: 2025-05-17 01:45:44.933 [INFO][7103] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:44.950560 containerd[1819]: 2025-05-17 01:45:44.933 [INFO][7103] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" iface="eth0" netns="" May 17 01:45:44.950560 containerd[1819]: 2025-05-17 01:45:44.933 [INFO][7103] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:44.950560 containerd[1819]: 2025-05-17 01:45:44.933 [INFO][7103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:44.950560 containerd[1819]: 2025-05-17 01:45:44.944 [INFO][7119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" HandleID="k8s-pod-network.8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" Workload="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:44.950560 containerd[1819]: 2025-05-17 01:45:44.944 [INFO][7119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:44.950560 containerd[1819]: 2025-05-17 01:45:44.944 [INFO][7119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:44.950560 containerd[1819]: 2025-05-17 01:45:44.948 [WARNING][7119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" HandleID="k8s-pod-network.8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" Workload="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:44.950560 containerd[1819]: 2025-05-17 01:45:44.948 [INFO][7119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" HandleID="k8s-pod-network.8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" Workload="ci--4081.3.3--n--d569167b40-k8s-csi--node--driver--k4dvh-eth0" May 17 01:45:44.950560 containerd[1819]: 2025-05-17 01:45:44.949 [INFO][7119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:44.950560 containerd[1819]: 2025-05-17 01:45:44.949 [INFO][7103] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6" May 17 01:45:44.950560 containerd[1819]: time="2025-05-17T01:45:44.950548468Z" level=info msg="TearDown network for sandbox \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\" successfully" May 17 01:45:44.952072 containerd[1819]: time="2025-05-17T01:45:44.952058592Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 01:45:44.952111 containerd[1819]: time="2025-05-17T01:45:44.952085190Z" level=info msg="RemovePodSandbox \"8c934f85aacc4c831c93795f3dbcd112bc91fd0fa47bfcfef0b915ba7728bac6\" returns successfully" May 17 01:45:44.952345 containerd[1819]: time="2025-05-17T01:45:44.952333304Z" level=info msg="StopPodSandbox for \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\"" May 17 01:45:44.985459 containerd[1819]: 2025-05-17 01:45:44.969 [WARNING][7142] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0", GenerateName:"calico-kube-controllers-86b6b5cd9b-", Namespace:"calico-system", SelfLink:"", UID:"b0cc1622-d561-4a8d-9dde-c3b01983d270", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86b6b5cd9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625", Pod:"calico-kube-controllers-86b6b5cd9b-tjktk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif640e379973", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:44.985459 containerd[1819]: 2025-05-17 01:45:44.969 [INFO][7142] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:44.985459 containerd[1819]: 2025-05-17 01:45:44.969 [INFO][7142] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" iface="eth0" netns="" May 17 01:45:44.985459 containerd[1819]: 2025-05-17 01:45:44.969 [INFO][7142] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:44.985459 containerd[1819]: 2025-05-17 01:45:44.969 [INFO][7142] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:44.985459 containerd[1819]: 2025-05-17 01:45:44.979 [INFO][7158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" HandleID="k8s-pod-network.51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:44.985459 containerd[1819]: 2025-05-17 01:45:44.979 [INFO][7158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:44.985459 containerd[1819]: 2025-05-17 01:45:44.979 [INFO][7158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:44.985459 containerd[1819]: 2025-05-17 01:45:44.983 [WARNING][7158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" HandleID="k8s-pod-network.51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:44.985459 containerd[1819]: 2025-05-17 01:45:44.983 [INFO][7158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" HandleID="k8s-pod-network.51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:44.985459 containerd[1819]: 2025-05-17 01:45:44.984 [INFO][7158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:44.985459 containerd[1819]: 2025-05-17 01:45:44.984 [INFO][7142] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:44.985459 containerd[1819]: time="2025-05-17T01:45:44.985455259Z" level=info msg="TearDown network for sandbox \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\" successfully" May 17 01:45:44.985806 containerd[1819]: time="2025-05-17T01:45:44.985471974Z" level=info msg="StopPodSandbox for \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\" returns successfully" May 17 01:45:44.985806 containerd[1819]: time="2025-05-17T01:45:44.985752878Z" level=info msg="RemovePodSandbox for \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\"" May 17 01:45:44.985806 containerd[1819]: time="2025-05-17T01:45:44.985768089Z" level=info msg="Forcibly stopping sandbox \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\"" May 17 01:45:45.020894 containerd[1819]: 2025-05-17 01:45:45.003 [WARNING][7181] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0", GenerateName:"calico-kube-controllers-86b6b5cd9b-", Namespace:"calico-system", SelfLink:"", UID:"b0cc1622-d561-4a8d-9dde-c3b01983d270", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86b6b5cd9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"8255d69060339b67bba97c13ec8ebbd003182fc2cbdd5439f412eac85fafa625", Pod:"calico-kube-controllers-86b6b5cd9b-tjktk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif640e379973", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:45.020894 containerd[1819]: 2025-05-17 01:45:45.003 [INFO][7181] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:45.020894 containerd[1819]: 2025-05-17 01:45:45.003 [INFO][7181] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" iface="eth0" netns="" May 17 01:45:45.020894 containerd[1819]: 2025-05-17 01:45:45.003 [INFO][7181] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:45.020894 containerd[1819]: 2025-05-17 01:45:45.003 [INFO][7181] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:45.020894 containerd[1819]: 2025-05-17 01:45:45.013 [INFO][7195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" HandleID="k8s-pod-network.51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:45.020894 containerd[1819]: 2025-05-17 01:45:45.013 [INFO][7195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:45.020894 containerd[1819]: 2025-05-17 01:45:45.013 [INFO][7195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:45.020894 containerd[1819]: 2025-05-17 01:45:45.018 [WARNING][7195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" HandleID="k8s-pod-network.51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:45.020894 containerd[1819]: 2025-05-17 01:45:45.018 [INFO][7195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" HandleID="k8s-pod-network.51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--kube--controllers--86b6b5cd9b--tjktk-eth0" May 17 01:45:45.020894 containerd[1819]: 2025-05-17 01:45:45.019 [INFO][7195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:45.020894 containerd[1819]: 2025-05-17 01:45:45.020 [INFO][7181] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c" May 17 01:45:45.020894 containerd[1819]: time="2025-05-17T01:45:45.020890107Z" level=info msg="TearDown network for sandbox \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\" successfully" May 17 01:45:45.022452 containerd[1819]: time="2025-05-17T01:45:45.022396913Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 01:45:45.022452 containerd[1819]: time="2025-05-17T01:45:45.022425134Z" level=info msg="RemovePodSandbox \"51f8ef5867439567eb5e5cad24cfaf00c9ca8bf8939ddcc42e0556bb70ceea0c\" returns successfully" May 17 01:45:45.022709 containerd[1819]: time="2025-05-17T01:45:45.022662456Z" level=info msg="StopPodSandbox for \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\"" May 17 01:45:45.060426 containerd[1819]: 2025-05-17 01:45:45.041 [WARNING][7218] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0", GenerateName:"calico-apiserver-bf55ffd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"1fc7e728-2787-4a01-a1fd-dfaad847d529", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf55ffd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4", Pod:"calico-apiserver-bf55ffd57-tw6s9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibeef9903bb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:45.060426 containerd[1819]: 2025-05-17 01:45:45.041 [INFO][7218] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:45.060426 containerd[1819]: 2025-05-17 01:45:45.041 [INFO][7218] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" iface="eth0" netns="" May 17 01:45:45.060426 containerd[1819]: 2025-05-17 01:45:45.041 [INFO][7218] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:45.060426 containerd[1819]: 2025-05-17 01:45:45.041 [INFO][7218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:45.060426 containerd[1819]: 2025-05-17 01:45:45.052 [INFO][7236] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" HandleID="k8s-pod-network.2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:45.060426 containerd[1819]: 2025-05-17 01:45:45.052 [INFO][7236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:45.060426 containerd[1819]: 2025-05-17 01:45:45.052 [INFO][7236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:45.060426 containerd[1819]: 2025-05-17 01:45:45.057 [WARNING][7236] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" HandleID="k8s-pod-network.2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:45.060426 containerd[1819]: 2025-05-17 01:45:45.057 [INFO][7236] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" HandleID="k8s-pod-network.2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:45.060426 containerd[1819]: 2025-05-17 01:45:45.058 [INFO][7236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:45.060426 containerd[1819]: 2025-05-17 01:45:45.059 [INFO][7218] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:45.060426 containerd[1819]: time="2025-05-17T01:45:45.060423009Z" level=info msg="TearDown network for sandbox \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\" successfully" May 17 01:45:45.060793 containerd[1819]: time="2025-05-17T01:45:45.060441231Z" level=info msg="StopPodSandbox for \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\" returns successfully" May 17 01:45:45.060793 containerd[1819]: time="2025-05-17T01:45:45.060744132Z" level=info msg="RemovePodSandbox for \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\"" May 17 01:45:45.060793 containerd[1819]: time="2025-05-17T01:45:45.060763864Z" level=info msg="Forcibly stopping sandbox \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\"" May 17 01:45:45.103336 containerd[1819]: 2025-05-17 01:45:45.081 [WARNING][7260] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0", GenerateName:"calico-apiserver-bf55ffd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"1fc7e728-2787-4a01-a1fd-dfaad847d529", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf55ffd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"0d6538a18c6d68986875f39200d6baa680bd5625acdd51a4c09b80db9b243db4", Pod:"calico-apiserver-bf55ffd57-tw6s9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibeef9903bb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:45.103336 containerd[1819]: 2025-05-17 01:45:45.081 [INFO][7260] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:45.103336 containerd[1819]: 2025-05-17 01:45:45.081 [INFO][7260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" iface="eth0" netns="" May 17 01:45:45.103336 containerd[1819]: 2025-05-17 01:45:45.081 [INFO][7260] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:45.103336 containerd[1819]: 2025-05-17 01:45:45.081 [INFO][7260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:45.103336 containerd[1819]: 2025-05-17 01:45:45.093 [INFO][7276] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" HandleID="k8s-pod-network.2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:45.103336 containerd[1819]: 2025-05-17 01:45:45.094 [INFO][7276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:45.103336 containerd[1819]: 2025-05-17 01:45:45.094 [INFO][7276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:45.103336 containerd[1819]: 2025-05-17 01:45:45.099 [WARNING][7276] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" HandleID="k8s-pod-network.2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:45.103336 containerd[1819]: 2025-05-17 01:45:45.099 [INFO][7276] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" HandleID="k8s-pod-network.2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--tw6s9-eth0" May 17 01:45:45.103336 containerd[1819]: 2025-05-17 01:45:45.101 [INFO][7276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:45.103336 containerd[1819]: 2025-05-17 01:45:45.102 [INFO][7260] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3" May 17 01:45:45.103817 containerd[1819]: time="2025-05-17T01:45:45.103371668Z" level=info msg="TearDown network for sandbox \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\" successfully" May 17 01:45:45.105352 containerd[1819]: time="2025-05-17T01:45:45.105336472Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 01:45:45.105390 containerd[1819]: time="2025-05-17T01:45:45.105365089Z" level=info msg="RemovePodSandbox \"2cefbda055611f8b6f6b7d38e9ad4530c4b88c8a25499d5c50c70ba6119c49a3\" returns successfully" May 17 01:45:45.105646 containerd[1819]: time="2025-05-17T01:45:45.105628006Z" level=info msg="StopPodSandbox for \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\"" May 17 01:45:45.139062 containerd[1819]: 2025-05-17 01:45:45.122 [WARNING][7301] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-whisker--7b6c6ddd44--8lgrj-eth0" May 17 01:45:45.139062 containerd[1819]: 2025-05-17 01:45:45.122 [INFO][7301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:45.139062 containerd[1819]: 2025-05-17 01:45:45.122 [INFO][7301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" iface="eth0" netns="" May 17 01:45:45.139062 containerd[1819]: 2025-05-17 01:45:45.122 [INFO][7301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:45.139062 containerd[1819]: 2025-05-17 01:45:45.122 [INFO][7301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:45.139062 containerd[1819]: 2025-05-17 01:45:45.132 [INFO][7317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" HandleID="k8s-pod-network.94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" Workload="ci--4081.3.3--n--d569167b40-k8s-whisker--7b6c6ddd44--8lgrj-eth0" May 17 01:45:45.139062 containerd[1819]: 2025-05-17 01:45:45.132 [INFO][7317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:45.139062 containerd[1819]: 2025-05-17 01:45:45.132 [INFO][7317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:45.139062 containerd[1819]: 2025-05-17 01:45:45.136 [WARNING][7317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" HandleID="k8s-pod-network.94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" Workload="ci--4081.3.3--n--d569167b40-k8s-whisker--7b6c6ddd44--8lgrj-eth0" May 17 01:45:45.139062 containerd[1819]: 2025-05-17 01:45:45.136 [INFO][7317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" HandleID="k8s-pod-network.94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" Workload="ci--4081.3.3--n--d569167b40-k8s-whisker--7b6c6ddd44--8lgrj-eth0" May 17 01:45:45.139062 containerd[1819]: 2025-05-17 01:45:45.137 [INFO][7317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:45.139062 containerd[1819]: 2025-05-17 01:45:45.138 [INFO][7301] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:45.139062 containerd[1819]: time="2025-05-17T01:45:45.139050038Z" level=info msg="TearDown network for sandbox \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\" successfully" May 17 01:45:45.139062 containerd[1819]: time="2025-05-17T01:45:45.139064876Z" level=info msg="StopPodSandbox for \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\" returns successfully" May 17 01:45:45.139368 containerd[1819]: time="2025-05-17T01:45:45.139307665Z" level=info msg="RemovePodSandbox for \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\"" May 17 01:45:45.139368 containerd[1819]: time="2025-05-17T01:45:45.139320516Z" level=info msg="Forcibly stopping sandbox \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\"" May 17 01:45:45.173479 containerd[1819]: 2025-05-17 01:45:45.155 [WARNING][7341] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" WorkloadEndpoint="ci--4081.3.3--n--d569167b40-k8s-whisker--7b6c6ddd44--8lgrj-eth0" May 17 01:45:45.173479 containerd[1819]: 2025-05-17 01:45:45.155 [INFO][7341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:45.173479 containerd[1819]: 2025-05-17 01:45:45.155 [INFO][7341] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" iface="eth0" netns="" May 17 01:45:45.173479 containerd[1819]: 2025-05-17 01:45:45.155 [INFO][7341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:45.173479 containerd[1819]: 2025-05-17 01:45:45.155 [INFO][7341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:45.173479 containerd[1819]: 2025-05-17 01:45:45.165 [INFO][7353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" HandleID="k8s-pod-network.94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" Workload="ci--4081.3.3--n--d569167b40-k8s-whisker--7b6c6ddd44--8lgrj-eth0" May 17 01:45:45.173479 containerd[1819]: 2025-05-17 01:45:45.165 [INFO][7353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:45.173479 containerd[1819]: 2025-05-17 01:45:45.166 [INFO][7353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:45.173479 containerd[1819]: 2025-05-17 01:45:45.170 [WARNING][7353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" HandleID="k8s-pod-network.94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" Workload="ci--4081.3.3--n--d569167b40-k8s-whisker--7b6c6ddd44--8lgrj-eth0" May 17 01:45:45.173479 containerd[1819]: 2025-05-17 01:45:45.170 [INFO][7353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" HandleID="k8s-pod-network.94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" Workload="ci--4081.3.3--n--d569167b40-k8s-whisker--7b6c6ddd44--8lgrj-eth0" May 17 01:45:45.173479 containerd[1819]: 2025-05-17 01:45:45.171 [INFO][7353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:45.173479 containerd[1819]: 2025-05-17 01:45:45.172 [INFO][7341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4" May 17 01:45:45.173792 containerd[1819]: time="2025-05-17T01:45:45.173507318Z" level=info msg="TearDown network for sandbox \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\" successfully" May 17 01:45:45.175148 containerd[1819]: time="2025-05-17T01:45:45.175107109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 01:45:45.175148 containerd[1819]: time="2025-05-17T01:45:45.175134635Z" level=info msg="RemovePodSandbox \"94f96b2540f30fa9b3871d1915676e56593be569cf100d216e01a162bd755fc4\" returns successfully" May 17 01:45:45.175424 containerd[1819]: time="2025-05-17T01:45:45.175411284Z" level=info msg="StopPodSandbox for \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\"" May 17 01:45:45.211333 containerd[1819]: 2025-05-17 01:45:45.193 [WARNING][7384] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3af411a3-09ba-4381-bce5-19753bf3d671", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2", Pod:"coredns-668d6bf9bc-p8kz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif115d9071ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:45.211333 containerd[1819]: 2025-05-17 01:45:45.194 [INFO][7384] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:45.211333 containerd[1819]: 2025-05-17 01:45:45.194 [INFO][7384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" iface="eth0" netns="" May 17 01:45:45.211333 containerd[1819]: 2025-05-17 01:45:45.194 [INFO][7384] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:45.211333 containerd[1819]: 2025-05-17 01:45:45.194 [INFO][7384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:45.211333 containerd[1819]: 2025-05-17 01:45:45.204 [INFO][7399] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" HandleID="k8s-pod-network.3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:45.211333 containerd[1819]: 2025-05-17 01:45:45.204 [INFO][7399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:45.211333 containerd[1819]: 2025-05-17 01:45:45.204 [INFO][7399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 01:45:45.211333 containerd[1819]: 2025-05-17 01:45:45.208 [WARNING][7399] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" HandleID="k8s-pod-network.3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:45.211333 containerd[1819]: 2025-05-17 01:45:45.208 [INFO][7399] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" HandleID="k8s-pod-network.3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:45.211333 containerd[1819]: 2025-05-17 01:45:45.209 [INFO][7399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:45.211333 containerd[1819]: 2025-05-17 01:45:45.210 [INFO][7384] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:45.211333 containerd[1819]: time="2025-05-17T01:45:45.211318093Z" level=info msg="TearDown network for sandbox \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\" successfully" May 17 01:45:45.211663 containerd[1819]: time="2025-05-17T01:45:45.211334334Z" level=info msg="StopPodSandbox for \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\" returns successfully" May 17 01:45:45.211663 containerd[1819]: time="2025-05-17T01:45:45.211617849Z" level=info msg="RemovePodSandbox for \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\"" May 17 01:45:45.211663 containerd[1819]: time="2025-05-17T01:45:45.211633131Z" level=info msg="Forcibly stopping sandbox \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\"" May 17 01:45:45.245611 containerd[1819]: 2025-05-17 01:45:45.229 [WARNING][7423] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3af411a3-09ba-4381-bce5-19753bf3d671", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"cc5802903811a884a4c0b7b8b6a037e83bd511c8bcbb9b1faf400eb6326d5fa2", Pod:"coredns-668d6bf9bc-p8kz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif115d9071ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:45.245611 containerd[1819]: 2025-05-17 01:45:45.229 [INFO][7423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:45.245611 containerd[1819]: 2025-05-17 01:45:45.229 [INFO][7423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" iface="eth0" netns="" May 17 01:45:45.245611 containerd[1819]: 2025-05-17 01:45:45.229 [INFO][7423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:45.245611 containerd[1819]: 2025-05-17 01:45:45.229 [INFO][7423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:45.245611 containerd[1819]: 2025-05-17 01:45:45.238 [INFO][7438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" HandleID="k8s-pod-network.3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:45.245611 containerd[1819]: 2025-05-17 01:45:45.238 [INFO][7438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:45.245611 containerd[1819]: 2025-05-17 01:45:45.238 [INFO][7438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 01:45:45.245611 containerd[1819]: 2025-05-17 01:45:45.243 [WARNING][7438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" HandleID="k8s-pod-network.3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:45.245611 containerd[1819]: 2025-05-17 01:45:45.243 [INFO][7438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" HandleID="k8s-pod-network.3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" Workload="ci--4081.3.3--n--d569167b40-k8s-coredns--668d6bf9bc--p8kz8-eth0" May 17 01:45:45.245611 containerd[1819]: 2025-05-17 01:45:45.244 [INFO][7438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:45.245611 containerd[1819]: 2025-05-17 01:45:45.244 [INFO][7423] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2" May 17 01:45:45.245922 containerd[1819]: time="2025-05-17T01:45:45.245635039Z" level=info msg="TearDown network for sandbox \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\" successfully" May 17 01:45:45.247081 containerd[1819]: time="2025-05-17T01:45:45.247040719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 01:45:45.247081 containerd[1819]: time="2025-05-17T01:45:45.247067471Z" level=info msg="RemovePodSandbox \"3ddc52a2165dee1d71ff734f7db0d01a77b424dee0d06bfaac736a910d4fbdb2\" returns successfully" May 17 01:45:45.247349 containerd[1819]: time="2025-05-17T01:45:45.247307326Z" level=info msg="StopPodSandbox for \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\"" May 17 01:45:45.283297 containerd[1819]: 2025-05-17 01:45:45.265 [WARNING][7458] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18", Pod:"goldmane-78d55f7ddc-zwf9c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56b58355c98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:45.283297 containerd[1819]: 2025-05-17 01:45:45.265 [INFO][7458] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:45.283297 containerd[1819]: 2025-05-17 01:45:45.265 [INFO][7458] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" iface="eth0" netns="" May 17 01:45:45.283297 containerd[1819]: 2025-05-17 01:45:45.265 [INFO][7458] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:45.283297 containerd[1819]: 2025-05-17 01:45:45.265 [INFO][7458] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:45.283297 containerd[1819]: 2025-05-17 01:45:45.276 [INFO][7477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" HandleID="k8s-pod-network.ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" Workload="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:45.283297 containerd[1819]: 2025-05-17 01:45:45.276 [INFO][7477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:45.283297 containerd[1819]: 2025-05-17 01:45:45.276 [INFO][7477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:45.283297 containerd[1819]: 2025-05-17 01:45:45.280 [WARNING][7477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" HandleID="k8s-pod-network.ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" Workload="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:45.283297 containerd[1819]: 2025-05-17 01:45:45.280 [INFO][7477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" HandleID="k8s-pod-network.ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" Workload="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:45.283297 containerd[1819]: 2025-05-17 01:45:45.281 [INFO][7477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:45.283297 containerd[1819]: 2025-05-17 01:45:45.282 [INFO][7458] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:45.283618 containerd[1819]: time="2025-05-17T01:45:45.283320172Z" level=info msg="TearDown network for sandbox \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\" successfully" May 17 01:45:45.283618 containerd[1819]: time="2025-05-17T01:45:45.283337950Z" level=info msg="StopPodSandbox for \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\" returns successfully" May 17 01:45:45.283708 containerd[1819]: time="2025-05-17T01:45:45.283667236Z" level=info msg="RemovePodSandbox for \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\"" May 17 01:45:45.283708 containerd[1819]: time="2025-05-17T01:45:45.283684051Z" level=info msg="Forcibly stopping sandbox \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\"" May 17 01:45:45.323536 containerd[1819]: 2025-05-17 01:45:45.302 [WARNING][7504] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"80811bd45bcb1bddd8226baf198d25583ab4b53eb87864bb4953a857a12d9f18", Pod:"goldmane-78d55f7ddc-zwf9c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56b58355c98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:45.323536 containerd[1819]: 2025-05-17 01:45:45.303 [INFO][7504] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:45.323536 containerd[1819]: 2025-05-17 01:45:45.303 [INFO][7504] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" iface="eth0" netns="" May 17 01:45:45.323536 containerd[1819]: 2025-05-17 01:45:45.303 [INFO][7504] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:45.323536 containerd[1819]: 2025-05-17 01:45:45.303 [INFO][7504] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:45.323536 containerd[1819]: 2025-05-17 01:45:45.315 [INFO][7519] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" HandleID="k8s-pod-network.ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" Workload="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:45.323536 containerd[1819]: 2025-05-17 01:45:45.315 [INFO][7519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:45.323536 containerd[1819]: 2025-05-17 01:45:45.315 [INFO][7519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:45.323536 containerd[1819]: 2025-05-17 01:45:45.320 [WARNING][7519] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" HandleID="k8s-pod-network.ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" Workload="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:45.323536 containerd[1819]: 2025-05-17 01:45:45.320 [INFO][7519] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" HandleID="k8s-pod-network.ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" Workload="ci--4081.3.3--n--d569167b40-k8s-goldmane--78d55f7ddc--zwf9c-eth0" May 17 01:45:45.323536 containerd[1819]: 2025-05-17 01:45:45.321 [INFO][7519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:45.323536 containerd[1819]: 2025-05-17 01:45:45.322 [INFO][7504] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f" May 17 01:45:45.324122 containerd[1819]: time="2025-05-17T01:45:45.323567330Z" level=info msg="TearDown network for sandbox \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\" successfully" May 17 01:45:45.329670 containerd[1819]: time="2025-05-17T01:45:45.329167862Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 01:45:45.329670 containerd[1819]: time="2025-05-17T01:45:45.329245964Z" level=info msg="RemovePodSandbox \"ae414c7c09e35067cc042c0f775359ba7f1d569a716362066a166bd6c056104f\" returns successfully" May 17 01:45:45.330001 containerd[1819]: time="2025-05-17T01:45:45.329989024Z" level=info msg="StopPodSandbox for \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\"" May 17 01:45:45.364544 containerd[1819]: 2025-05-17 01:45:45.347 [WARNING][7544] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0", GenerateName:"calico-apiserver-bf55ffd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"a83cb314-94c7-48be-9000-43244ee2be0f", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf55ffd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae", Pod:"calico-apiserver-bf55ffd57-6x6sv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif93cbfd5167", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:45.364544 containerd[1819]: 2025-05-17 01:45:45.347 [INFO][7544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:45.364544 containerd[1819]: 2025-05-17 01:45:45.347 [INFO][7544] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" iface="eth0" netns="" May 17 01:45:45.364544 containerd[1819]: 2025-05-17 01:45:45.347 [INFO][7544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:45.364544 containerd[1819]: 2025-05-17 01:45:45.347 [INFO][7544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:45.364544 containerd[1819]: 2025-05-17 01:45:45.357 [INFO][7559] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" HandleID="k8s-pod-network.df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:45.364544 containerd[1819]: 2025-05-17 01:45:45.357 [INFO][7559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:45.364544 containerd[1819]: 2025-05-17 01:45:45.357 [INFO][7559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:45.364544 containerd[1819]: 2025-05-17 01:45:45.361 [WARNING][7559] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" HandleID="k8s-pod-network.df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:45.364544 containerd[1819]: 2025-05-17 01:45:45.361 [INFO][7559] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" HandleID="k8s-pod-network.df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:45.364544 containerd[1819]: 2025-05-17 01:45:45.363 [INFO][7559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:45.364544 containerd[1819]: 2025-05-17 01:45:45.363 [INFO][7544] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:45.364971 containerd[1819]: time="2025-05-17T01:45:45.364554310Z" level=info msg="TearDown network for sandbox \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\" successfully" May 17 01:45:45.364971 containerd[1819]: time="2025-05-17T01:45:45.364573827Z" level=info msg="StopPodSandbox for \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\" returns successfully" May 17 01:45:45.364971 containerd[1819]: time="2025-05-17T01:45:45.364868197Z" level=info msg="RemovePodSandbox for \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\"" May 17 01:45:45.364971 containerd[1819]: time="2025-05-17T01:45:45.364890413Z" level=info msg="Forcibly stopping sandbox \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\"" May 17 01:45:45.440182 containerd[1819]: 2025-05-17 01:45:45.389 [WARNING][7582] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0", GenerateName:"calico-apiserver-bf55ffd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"a83cb314-94c7-48be-9000-43244ee2be0f", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 44, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf55ffd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-n-d569167b40", ContainerID:"01158141b50e961138d122298ce8ef138387a57d451a941ee2466e2d732f2bae", Pod:"calico-apiserver-bf55ffd57-6x6sv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif93cbfd5167", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:45:45.440182 containerd[1819]: 2025-05-17 01:45:45.390 [INFO][7582] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:45.440182 containerd[1819]: 2025-05-17 01:45:45.390 [INFO][7582] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" iface="eth0" netns="" May 17 01:45:45.440182 containerd[1819]: 2025-05-17 01:45:45.390 [INFO][7582] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:45.440182 containerd[1819]: 2025-05-17 01:45:45.390 [INFO][7582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:45.440182 containerd[1819]: 2025-05-17 01:45:45.429 [INFO][7599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" HandleID="k8s-pod-network.df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:45.440182 containerd[1819]: 2025-05-17 01:45:45.429 [INFO][7599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:45:45.440182 containerd[1819]: 2025-05-17 01:45:45.429 [INFO][7599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:45:45.440182 containerd[1819]: 2025-05-17 01:45:45.436 [WARNING][7599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" HandleID="k8s-pod-network.df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:45.440182 containerd[1819]: 2025-05-17 01:45:45.436 [INFO][7599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" HandleID="k8s-pod-network.df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" Workload="ci--4081.3.3--n--d569167b40-k8s-calico--apiserver--bf55ffd57--6x6sv-eth0" May 17 01:45:45.440182 containerd[1819]: 2025-05-17 01:45:45.437 [INFO][7599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:45:45.440182 containerd[1819]: 2025-05-17 01:45:45.439 [INFO][7582] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61" May 17 01:45:45.440903 containerd[1819]: time="2025-05-17T01:45:45.440212852Z" level=info msg="TearDown network for sandbox \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\" successfully" May 17 01:45:45.442265 containerd[1819]: time="2025-05-17T01:45:45.442221564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 01:45:45.442265 containerd[1819]: time="2025-05-17T01:45:45.442256788Z" level=info msg="RemovePodSandbox \"df41d7af42f25cbca9d6dc4cc73d244c4b2be108e3998e14c7cd7e1a91374c61\" returns successfully" May 17 01:45:45.698153 containerd[1819]: time="2025-05-17T01:45:45.698060282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:45:46.029773 containerd[1819]: time="2025-05-17T01:45:46.029496204Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:45:46.050467 containerd[1819]: time="2025-05-17T01:45:46.050432278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:45:46.050554 containerd[1819]: time="2025-05-17T01:45:46.050496230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 01:45:46.050634 kubelet[3068]: E0517 01:45:46.050573 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:45:46.050634 kubelet[3068]: E0517 01:45:46.050605 3068 
kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:45:46.050863 kubelet[3068]: E0517 01:45:46.050682 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z97mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-zwf9c_calico-system(4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:45:46.051958 kubelet[3068]: E0517 01:45:46.051913 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:45:46.574578 systemd[1]: Started sshd@11-145.40.90.165:22-187.16.96.250:40440.service - OpenSSH per-connection server daemon (187.16.96.250:40440). May 17 01:45:47.698442 kubelet[3068]: E0517 01:45:47.698265 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:45:47.825967 sshd[7617]: Received disconnect from 187.16.96.250 port 40440:11: Bye Bye [preauth] May 17 01:45:47.825967 sshd[7617]: Disconnected from authenticating user root 187.16.96.250 port 40440 [preauth] May 17 01:45:47.841354 systemd[1]: sshd@11-145.40.90.165:22-187.16.96.250:40440.service: Deactivated successfully. 
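Every failed pull in this stretch dies at the same first step: containerd's resolver asks the ghcr.io token endpoint for an anonymous pull token before it ever requests a manifest, the registry answers 403 Forbidden, and kubelet then converts each ErrImagePull into the ImagePullBackOff records that follow. The token request is plain HTTPS and can be replayed outside containerd to confirm the failure is on the registry side; a small Go probe, with the URL copied verbatim from the log and everything else illustrative:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The same anonymous-token request containerd logs above.
	url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
	// A "403 Forbidden" status reproduces the log: the registry refuses even
	// anonymous pull tokens for this repository, so no manifest fetch can start.
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}

The retry cadence around this (goldmane pull attempts at 01:45:46, 01:46:11, and 01:47:02 below) is consistent with kubelet's default image-pull backoff, which doubles from a 10s base toward a 5-minute cap.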
May 17 01:45:52.507472 kubelet[3068]: I0517 01:45:52.507406 3068 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:45:57.383794 kubelet[3068]: I0517 01:45:57.383772 3068 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:45:57.696627 kubelet[3068]: E0517 01:45:57.696410 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:46:02.696841 containerd[1819]: time="2025-05-17T01:46:02.696768988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:46:03.036207 containerd[1819]: time="2025-05-17T01:46:03.035955058Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:46:03.036844 containerd[1819]: time="2025-05-17T01:46:03.036771607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:46:03.036882 containerd[1819]: time="2025-05-17T01:46:03.036841625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 01:46:03.036947 kubelet[3068]: E0517 01:46:03.036923 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:46:03.037157 kubelet[3068]: E0517 01:46:03.036957 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:46:03.037157 kubelet[3068]: E0517 01:46:03.037020 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:92a8012b5750456bb9056172472acd21,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-md4bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f968694-hv974_calico-system(b211c981-6173-4ca8-aa53-cf31a5319b90): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:46:03.038988 containerd[1819]: time="2025-05-17T01:46:03.038974719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:46:03.357541 containerd[1819]: time="2025-05-17T01:46:03.357412770Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:46:03.358474 containerd[1819]: time="2025-05-17T01:46:03.358375943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:46:03.358474 containerd[1819]: time="2025-05-17T01:46:03.358453487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 01:46:03.358612 kubelet[3068]: E0517 01:46:03.358557 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:46:03.358612 kubelet[3068]: E0517 01:46:03.358589 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:46:03.358715 kubelet[3068]: E0517 01:46:03.358659 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-md4bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f968694-hv974_calico-system(b211c981-6173-4ca8-aa53-cf31a5319b90): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:46:03.359833 kubelet[3068]: E0517 01:46:03.359787 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:46:11.695644 containerd[1819]: time="2025-05-17T01:46:11.695622952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:46:11.986907 containerd[1819]: time="2025-05-17T01:46:11.986636241Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:46:11.987611 containerd[1819]: time="2025-05-17T01:46:11.987545111Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:46:11.987663 containerd[1819]: time="2025-05-17T01:46:11.987617358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 01:46:11.987742 kubelet[3068]: E0517 01:46:11.987687 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:46:11.987742 kubelet[3068]: E0517 01:46:11.987720 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:46:11.987979 kubelet[3068]: E0517 01:46:11.987800 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z97mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-zwf9c_calico-system(4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:46:11.988965 kubelet[3068]: E0517 01:46:11.988947 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:46:17.698761 kubelet[3068]: E0517 01:46:17.698645 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:46:22.695796 kubelet[3068]: E0517 01:46:22.695762 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:46:30.696619 kubelet[3068]: E0517 01:46:30.696577 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:46:35.696457 kubelet[3068]: E0517 01:46:35.696410 3068 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:46:45.698151 containerd[1819]: time="2025-05-17T01:46:45.698074347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:46:46.012629 containerd[1819]: time="2025-05-17T01:46:46.012348426Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:46:46.013357 containerd[1819]: time="2025-05-17T01:46:46.013303606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:46:46.013419 containerd[1819]: time="2025-05-17T01:46:46.013365631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 01:46:46.013546 kubelet[3068]: E0517 01:46:46.013484 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:46:46.013546 kubelet[3068]: E0517 01:46:46.013527 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:46:46.013827 kubelet[3068]: E0517 01:46:46.013605 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:92a8012b5750456bb9056172472acd21,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-md4bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f968694-hv974_calico-system(b211c981-6173-4ca8-aa53-cf31a5319b90): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:46:46.015278 containerd[1819]: time="2025-05-17T01:46:46.015258459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:46:46.332239 containerd[1819]: time="2025-05-17T01:46:46.332183435Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:46:46.332846 containerd[1819]: time="2025-05-17T01:46:46.332755706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:46:46.332846 containerd[1819]: time="2025-05-17T01:46:46.332833997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 01:46:46.333007 kubelet[3068]: E0517 01:46:46.332949 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:46:46.333007 kubelet[3068]: E0517 01:46:46.332982 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:46:46.333094 kubelet[3068]: E0517 01:46:46.333046 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-md4bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f968694-hv974_calico-system(b211c981-6173-4ca8-aa53-cf31a5319b90): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:46:46.334426 kubelet[3068]: E0517 01:46:46.334352 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:46:48.696723 kubelet[3068]: E0517 01:46:48.696652 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:46:57.184583 systemd[1]: Started sshd@12-145.40.90.165:22-218.92.0.157:59686.service - OpenSSH per-connection server daemon (218.92.0.157:59686). May 17 01:46:58.678903 sshd[7822]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root May 17 01:47:00.698641 kubelet[3068]: E0517 01:47:00.698512 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:47:00.836695 sshd[7820]: PAM: Permission denied for root from 218.92.0.157 May 17 01:47:01.138104 sshd[7842]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root May 17 01:47:02.697555 containerd[1819]: time="2025-05-17T01:47:02.697446892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:47:02.998198 containerd[1819]: time="2025-05-17T01:47:02.997918218Z" level=info msg="trying next host" error="failed to authorize: 
failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:47:02.999116 containerd[1819]: time="2025-05-17T01:47:02.999034436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:47:02.999116 containerd[1819]: time="2025-05-17T01:47:02.999106177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 01:47:02.999342 kubelet[3068]: E0517 01:47:02.999267 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:47:02.999342 kubelet[3068]: E0517 01:47:02.999323 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:47:02.999588 kubelet[3068]: E0517 01:47:02.999400 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z97mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-zwf9c_calico-system(4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:47:03.000600 kubelet[3068]: E0517 01:47:03.000553 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:47:03.571640 sshd[7820]: PAM: Permission denied for root from 218.92.0.157 May 17 01:47:03.872456 sshd[7843]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root May 17 01:47:05.718580 sshd[7820]: PAM: Permission denied for root from 218.92.0.157 May 17 01:47:05.868451 sshd[7820]: Received disconnect from 218.92.0.157 port 59686:11: [preauth] May 17 01:47:05.868451 sshd[7820]: Disconnected from authenticating user root 218.92.0.157 port 59686 [preauth] May 17 01:47:05.872084 systemd[1]: sshd@12-145.40.90.165:22-218.92.0.157:59686.service: Deactivated successfully. May 17 01:47:12.696410 kubelet[3068]: E0517 01:47:12.696281 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:47:15.697326 kubelet[3068]: E0517 01:47:15.697176 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:47:25.695957 kubelet[3068]: E0517 01:47:25.695908 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:47:29.697893 kubelet[3068]: E0517 01:47:29.697743 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:47:39.697178 kubelet[3068]: E0517 01:47:39.697085 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:47:43.698008 kubelet[3068]: E0517 01:47:43.697855 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:47:45.655479 systemd[1]: Started sshd@13-145.40.90.165:22-69.48.204.173:42252.service - OpenSSH per-connection server daemon (69.48.204.173:42252). 
May 17 01:47:45.957074 sshd[7946]: Invalid user vhserver3 from 69.48.204.173 port 42252 May 17 01:47:46.017580 sshd[7946]: Received disconnect from 69.48.204.173 port 42252:11: Bye Bye [preauth] May 17 01:47:46.017580 sshd[7946]: Disconnected from invalid user vhserver3 69.48.204.173 port 42252 [preauth] May 17 01:47:46.020839 systemd[1]: sshd@13-145.40.90.165:22-69.48.204.173:42252.service: Deactivated successfully. May 17 01:47:53.697304 kubelet[3068]: E0517 01:47:53.697221 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:47:58.697295 kubelet[3068]: E0517 01:47:58.697168 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:48:05.695973 kubelet[3068]: E0517 01:48:05.695917 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:48:11.695636 kubelet[3068]: E0517 01:48:11.695578 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:48:17.697615 containerd[1819]: time="2025-05-17T01:48:17.697527985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:48:18.006628 containerd[1819]: time="2025-05-17T01:48:18.006533177Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:48:18.007095 containerd[1819]: time="2025-05-17T01:48:18.007013435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:48:18.007095 containerd[1819]: time="2025-05-17T01:48:18.007058911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 01:48:18.007175 kubelet[3068]: E0517 01:48:18.007132 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:48:18.007375 kubelet[3068]: E0517 01:48:18.007182 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:48:18.007375 kubelet[3068]: E0517 01:48:18.007246 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:92a8012b5750456bb9056172472acd21,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-md4bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f968694-hv974_calico-system(b211c981-6173-4ca8-aa53-cf31a5319b90): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:48:18.008787 containerd[1819]: time="2025-05-17T01:48:18.008775809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:48:18.318168 containerd[1819]: time="2025-05-17T01:48:18.317901414Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:48:18.318802 containerd[1819]: time="2025-05-17T01:48:18.318759467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:48:18.318882 containerd[1819]: time="2025-05-17T01:48:18.318803042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 01:48:18.319017 kubelet[3068]: E0517 01:48:18.318945 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:48:18.319017 kubelet[3068]: E0517 01:48:18.318990 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:48:18.319101 kubelet[3068]: E0517 01:48:18.319079 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-md4bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f968694-hv974_calico-system(b211c981-6173-4ca8-aa53-cf31a5319b90): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:48:18.320295 kubelet[3068]: E0517 01:48:18.320246 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:48:24.696788 containerd[1819]: time="2025-05-17T01:48:24.696741578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:48:25.033550 containerd[1819]: time="2025-05-17T01:48:25.033243935Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:48:25.034279 containerd[1819]: time="2025-05-17T01:48:25.034179011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:48:25.034373 containerd[1819]: time="2025-05-17T01:48:25.034246268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 01:48:25.034453 kubelet[3068]: E0517 01:48:25.034400 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:48:25.034453 kubelet[3068]: E0517 01:48:25.034434 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:48:25.034660 kubelet[3068]: E0517 01:48:25.034512 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z97mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-zwf9c_calico-system(4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:48:25.035673 kubelet[3068]: E0517 01:48:25.035630 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:48:32.698992 kubelet[3068]: E0517 01:48:32.698868 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:48:38.695894 kubelet[3068]: E0517 01:48:38.695838 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:48:47.696476 kubelet[3068]: E0517 01:48:47.696387 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:48:50.696559 kubelet[3068]: E0517 01:48:50.696466 3068 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:48:53.099666 systemd[1]: Started sshd@14-145.40.90.165:22-218.92.0.157:19202.service - OpenSSH per-connection server daemon (218.92.0.157:19202). May 17 01:48:54.135875 sshd[8146]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root May 17 01:48:56.553992 sshd[8144]: PAM: Permission denied for root from 218.92.0.157 May 17 01:48:56.824053 sshd[8147]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root May 17 01:48:58.515388 sshd[8144]: PAM: Permission denied for root from 218.92.0.157 May 17 01:48:58.787707 sshd[8148]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root May 17 01:48:59.698831 kubelet[3068]: E0517 01:48:59.698642 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:49:00.418641 sshd[8144]: PAM: Permission denied for root from 218.92.0.157 May 17 01:49:00.553301 sshd[8144]: Received disconnect from 218.92.0.157 port 19202:11: [preauth] May 17 01:49:00.553301 sshd[8144]: Disconnected from authenticating user root 218.92.0.157 port 19202 [preauth] May 17 01:49:00.554993 systemd[1]: sshd@14-145.40.90.165:22-218.92.0.157:19202.service: Deactivated successfully. 
May 17 01:49:04.696768 kubelet[3068]: E0517 01:49:04.696684 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:49:12.696917 kubelet[3068]: E0517 01:49:12.696838 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:49:16.697133 kubelet[3068]: E0517 01:49:16.696996 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:49:24.696385 kubelet[3068]: E0517 01:49:24.696361 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:49:27.696738 kubelet[3068]: E0517 01:49:27.696661 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:49:36.165071 systemd[1]: Started sshd@15-145.40.90.165:22-94.102.4.12:41580.service - OpenSSH per-connection server daemon (94.102.4.12:41580). May 17 01:49:36.696778 kubelet[3068]: E0517 01:49:36.696678 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:49:37.410368 sshd[8246]: Invalid user amits from 94.102.4.12 port 41580 May 17 01:49:37.619307 sshd[8246]: Received disconnect from 94.102.4.12 port 41580:11: Bye Bye [preauth] May 17 01:49:37.619307 sshd[8246]: Disconnected from invalid user amits 94.102.4.12 port 41580 [preauth] May 17 01:49:37.624324 systemd[1]: sshd@15-145.40.90.165:22-94.102.4.12:41580.service: Deactivated successfully. 
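From here the failure is steady-state: each ErrImagePull is converted into ImagePullBackOff, so kubelet only reattempts the pull once the current backoff window expires, which is why the whisker and goldmane errors recur on a slow, regular cadence instead of flooding the log. Below is a sketch of such an exponential backoff schedule; the 10-second initial delay and 300-second cap are assumed kubelet defaults, not values read out of this journal:

def backoff_schedule(initial: float = 10.0, cap: float = 300.0, steps: int = 8):
    # Doubling delay with an upper cap -- the generic shape of a pull backoff.
    delay = initial
    for _ in range(steps):
        yield delay
        delay = min(delay * 2.0, cap)

print(list(backoff_schedule()))
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]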
May 17 01:49:41.697239 kubelet[3068]: E0517 01:49:41.697194 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:49:47.696231 kubelet[3068]: E0517 01:49:47.696143 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:49:56.695972 kubelet[3068]: E0517 01:49:56.695941 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:50:00.696603 kubelet[3068]: E0517 01:50:00.696556 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:50:11.697077 kubelet[3068]: E0517 01:50:11.696841 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:50:12.696676 kubelet[3068]: E0517 01:50:12.696593 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:50:21.279653 update_engine[1805]: I20250517 01:50:21.279522 1805 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 17 01:50:21.279653 update_engine[1805]: I20250517 01:50:21.279618 1805 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 17 01:50:21.280730 update_engine[1805]: I20250517 01:50:21.279999 1805 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 17 01:50:21.281104 update_engine[1805]: I20250517 01:50:21.281013 1805 omaha_request_params.cc:62] Current group set to lts May 17 01:50:21.281310 update_engine[1805]: I20250517 01:50:21.281256 1805 update_attempter.cc:499] Already updated boot flags. Skipping. May 17 01:50:21.281460 update_engine[1805]: I20250517 01:50:21.281304 1805 update_attempter.cc:643] Scheduling an action processor start. 
May 17 01:50:21.281460 update_engine[1805]: I20250517 01:50:21.281347 1805 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 01:50:21.281460 update_engine[1805]: I20250517 01:50:21.281419 1805 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 17 01:50:21.281719 update_engine[1805]: I20250517 01:50:21.281579 1805 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 01:50:21.281719 update_engine[1805]: I20250517 01:50:21.281608 1805 omaha_request_action.cc:272] Request: May 17 01:50:21.281719 update_engine[1805]: [Omaha request XML body omitted] May 17 01:50:21.281719 update_engine[1805]: I20250517 01:50:21.281625 1805 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:50:21.282652 locksmithd[1854]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 17 01:50:21.284877 update_engine[1805]: I20250517 01:50:21.284838 1805 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:50:21.285033 update_engine[1805]: I20250517 01:50:21.284993 1805 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 01:50:21.285704 update_engine[1805]: E20250517 01:50:21.285661 1805 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:50:21.285704 update_engine[1805]: I20250517 01:50:21.285693 1805 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 17 01:50:25.697314 kubelet[3068]: E0517 01:50:25.697162 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:50:25.698391 kubelet[3068]: E0517 01:50:25.698081 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:50:31.189876 update_engine[1805]: I20250517 01:50:31.189698 1805 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:50:31.190977 update_engine[1805]: I20250517 01:50:31.190258 1805 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:50:31.190977 update_engine[1805]: I20250517 01:50:31.190829 1805 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 01:50:31.191890 update_engine[1805]: E20250517 01:50:31.191764 1805 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:50:31.192076 update_engine[1805]: I20250517 01:50:31.191910 1805 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 17 01:50:36.698344 kubelet[3068]: E0517 01:50:36.698181 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:50:36.774153 systemd[1]: Started sshd@16-145.40.90.165:22-187.16.96.250:55172.service - OpenSSH per-connection server daemon (187.16.96.250:55172). May 17 01:50:37.802970 sshd[8403]: Invalid user tiptop from 187.16.96.250 port 55172 May 17 01:50:38.003370 sshd[8403]: Received disconnect from 187.16.96.250 port 55172:11: Bye Bye [preauth] May 17 01:50:38.003370 sshd[8403]: Disconnected from invalid user tiptop 187.16.96.250 port 55172 [preauth] May 17 01:50:38.006693 systemd[1]: sshd@16-145.40.90.165:22-187.16.96.250:55172.service: Deactivated successfully. 
May 17 01:50:39.696687 kubelet[3068]: E0517 01:50:39.696591 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:50:41.189553 update_engine[1805]: I20250517 01:50:41.189391 1805 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:50:41.190438 update_engine[1805]: I20250517 01:50:41.189953 1805 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:50:41.190560 update_engine[1805]: I20250517 01:50:41.190516 1805 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 01:50:41.191443 update_engine[1805]: E20250517 01:50:41.191327 1805 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:50:41.191631 update_engine[1805]: I20250517 01:50:41.191473 1805 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 17 01:50:50.698045 kubelet[3068]: E0517 01:50:50.697938 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:50:51.189517 update_engine[1805]: I20250517 01:50:51.189359 1805 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:50:51.190364 update_engine[1805]: I20250517 01:50:51.189947 1805 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:50:51.190604 update_engine[1805]: I20250517 01:50:51.190512 1805 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 01:50:51.191368 update_engine[1805]: E20250517 01:50:51.191221 1805 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:50:51.191587 update_engine[1805]: I20250517 01:50:51.191391 1805 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 01:50:51.191587 update_engine[1805]: I20250517 01:50:51.191423 1805 omaha_request_action.cc:617] Omaha request response: May 17 01:50:51.191786 update_engine[1805]: E20250517 01:50:51.191584 1805 omaha_request_action.cc:636] Omaha request network transfer failed. May 17 01:50:51.191786 update_engine[1805]: I20250517 01:50:51.191634 1805 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 17 01:50:51.191786 update_engine[1805]: I20250517 01:50:51.191651 1805 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 01:50:51.191786 update_engine[1805]: I20250517 01:50:51.191665 1805 update_attempter.cc:306] Processing Done. May 17 01:50:51.191786 update_engine[1805]: E20250517 01:50:51.191696 1805 update_attempter.cc:619] Update failed. May 17 01:50:51.191786 update_engine[1805]: I20250517 01:50:51.191713 1805 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 17 01:50:51.191786 update_engine[1805]: I20250517 01:50:51.191728 1805 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 17 01:50:51.191786 update_engine[1805]: I20250517 01:50:51.191743 1805 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 17 01:50:51.192579 update_engine[1805]: I20250517 01:50:51.191897 1805 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 01:50:51.192579 update_engine[1805]: I20250517 01:50:51.191963 1805 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 01:50:51.192579 update_engine[1805]: I20250517 01:50:51.191983 1805 omaha_request_action.cc:272] Request: May 17 01:50:51.192579 update_engine[1805]: [Omaha request XML body omitted] May 17 01:50:51.192579 update_engine[1805]: I20250517 01:50:51.191999 1805 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:50:51.192579 update_engine[1805]: I20250517 01:50:51.192425 1805 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:50:51.193598 locksmithd[1854]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 17 01:50:51.194230 update_engine[1805]: I20250517 01:50:51.192833 1805 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 01:50:51.194230 update_engine[1805]: E20250517 01:50:51.193665 1805 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:50:51.194230 update_engine[1805]: I20250517 01:50:51.193791 1805 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 01:50:51.194230 update_engine[1805]: I20250517 01:50:51.193819 1805 omaha_request_action.cc:617] Omaha request response: May 17 01:50:51.194230 update_engine[1805]: I20250517 01:50:51.193837 1805 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 01:50:51.194230 update_engine[1805]: I20250517 01:50:51.193852 1805 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 01:50:51.194230 update_engine[1805]: I20250517 01:50:51.193868 1805 update_attempter.cc:306] Processing Done. May 17 01:50:51.194230 update_engine[1805]: I20250517 01:50:51.193884 1805 update_attempter.cc:310] Error event sent. May 17 01:50:51.194230 update_engine[1805]: I20250517 01:50:51.193908 1805 update_check_scheduler.cc:74] Next update check in 43m45s May 17 01:50:51.195027 locksmithd[1854]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 17 01:50:52.695995 kubelet[3068]: E0517 01:50:52.695951 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:50:53.257177 systemd[1]: Started sshd@17-145.40.90.165:22-218.92.0.157:32683.service - OpenSSH per-connection server daemon (218.92.0.157:32683). May 17 01:50:59.121101 sshd[8456]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root May 17 01:51:01.228053 sshd[8454]: PAM: Permission denied for root from 218.92.0.157 May 17 01:51:01.856849 sshd[8475]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root May 17 01:51:01.935849 systemd[1]: Started sshd@18-145.40.90.165:22-147.75.109.163:46800.service - OpenSSH per-connection server daemon (147.75.109.163:46800). May 17 01:51:02.014725 sshd[8480]: Accepted publickey for core from 147.75.109.163 port 46800 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:02.016132 sshd[8480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:02.020701 systemd-logind[1800]: New session 12 of user core. May 17 01:51:02.037491 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 01:51:02.185661 sshd[8480]: pam_unix(sshd:session): session closed for user core May 17 01:51:02.187690 systemd[1]: sshd@18-145.40.90.165:22-147.75.109.163:46800.service: Deactivated successfully. May 17 01:51:02.188876 systemd[1]: session-12.scope: Deactivated successfully. May 17 01:51:02.189823 systemd-logind[1800]: Session 12 logged out. Waiting for processes to exit. 
May 17 01:51:02.190575 systemd-logind[1800]: Removed session 12. May 17 01:51:03.696040 containerd[1819]: time="2025-05-17T01:51:03.695969535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:51:03.708029 sshd[8454]: PAM: Permission denied for root from 218.92.0.157 May 17 01:51:04.009262 containerd[1819]: time="2025-05-17T01:51:04.008984582Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:51:04.010080 containerd[1819]: time="2025-05-17T01:51:04.010004412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:51:04.010080 containerd[1819]: time="2025-05-17T01:51:04.010061152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 01:51:04.010221 kubelet[3068]: E0517 01:51:04.010168 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:51:04.010517 kubelet[3068]: E0517 01:51:04.010230 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:51:04.010517 kubelet[3068]: E0517 01:51:04.010392 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:92a8012b5750456bb9056172472acd21,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-md4bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f968694-hv974_calico-system(b211c981-6173-4ca8-aa53-cf31a5319b90): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:51:04.012061 containerd[1819]: time="2025-05-17T01:51:04.012008747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:51:04.316989 containerd[1819]: time="2025-05-17T01:51:04.316711869Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:51:04.317614 containerd[1819]: time="2025-05-17T01:51:04.317523138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:51:04.317614 containerd[1819]: time="2025-05-17T01:51:04.317595495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 01:51:04.317765 kubelet[3068]: E0517 01:51:04.317667 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:51:04.317765 kubelet[3068]: E0517 01:51:04.317697 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:51:04.317859 kubelet[3068]: E0517 01:51:04.317805 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-md4bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f968694-hv974_calico-system(b211c981-6173-4ca8-aa53-cf31a5319b90): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:51:04.319135 kubelet[3068]: E0517 01:51:04.319086 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:51:04.326816 sshd[8511]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root May 17 01:51:04.695668 kubelet[3068]: E0517 01:51:04.695618 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:51:06.453383 sshd[8454]: PAM: Permission denied for root from 218.92.0.157 May 17 01:51:06.588776 sshd[8454]: Received disconnect from 218.92.0.157 port 32683:11: [preauth] May 17 01:51:06.588776 sshd[8454]: Disconnected from authenticating user root 218.92.0.157 port 32683 [preauth] May 17 01:51:06.589677 systemd[1]: sshd@17-145.40.90.165:22-218.92.0.157:32683.service: Deactivated successfully. May 17 01:51:07.201041 systemd[1]: Started sshd@19-145.40.90.165:22-147.75.109.163:46812.service - OpenSSH per-connection server daemon (147.75.109.163:46812). May 17 01:51:07.264365 sshd[8517]: Accepted publickey for core from 147.75.109.163 port 46812 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:07.265830 sshd[8517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:07.270710 systemd-logind[1800]: New session 13 of user core. May 17 01:51:07.288602 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 01:51:07.416158 sshd[8517]: pam_unix(sshd:session): session closed for user core May 17 01:51:07.417835 systemd[1]: sshd@19-145.40.90.165:22-147.75.109.163:46812.service: Deactivated successfully. May 17 01:51:07.418783 systemd[1]: session-13.scope: Deactivated successfully. May 17 01:51:07.419444 systemd-logind[1800]: Session 13 logged out. Waiting for processes to exit. May 17 01:51:07.420041 systemd-logind[1800]: Removed session 13. May 17 01:51:12.447569 systemd[1]: Started sshd@20-145.40.90.165:22-147.75.109.163:46556.service - OpenSSH per-connection server daemon (147.75.109.163:46556). May 17 01:51:12.478142 sshd[8544]: Accepted publickey for core from 147.75.109.163 port 46556 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:12.478911 sshd[8544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:12.481909 systemd-logind[1800]: New session 14 of user core. 
May 17 01:51:12.498529 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 01:51:12.585525 sshd[8544]: pam_unix(sshd:session): session closed for user core May 17 01:51:12.599160 systemd[1]: sshd@20-145.40.90.165:22-147.75.109.163:46556.service: Deactivated successfully. May 17 01:51:12.600020 systemd[1]: session-14.scope: Deactivated successfully. May 17 01:51:12.600774 systemd-logind[1800]: Session 14 logged out. Waiting for processes to exit. May 17 01:51:12.601550 systemd[1]: Started sshd@21-145.40.90.165:22-147.75.109.163:46564.service - OpenSSH per-connection server daemon (147.75.109.163:46564). May 17 01:51:12.602066 systemd-logind[1800]: Removed session 14. May 17 01:51:12.634622 sshd[8571]: Accepted publickey for core from 147.75.109.163 port 46564 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:12.635328 sshd[8571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:12.637918 systemd-logind[1800]: New session 15 of user core. May 17 01:51:12.649540 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 01:51:12.764546 sshd[8571]: pam_unix(sshd:session): session closed for user core May 17 01:51:12.782964 systemd[1]: sshd@21-145.40.90.165:22-147.75.109.163:46564.service: Deactivated successfully. May 17 01:51:12.783847 systemd[1]: session-15.scope: Deactivated successfully. May 17 01:51:12.784598 systemd-logind[1800]: Session 15 logged out. Waiting for processes to exit. May 17 01:51:12.785286 systemd[1]: Started sshd@22-145.40.90.165:22-147.75.109.163:46576.service - OpenSSH per-connection server daemon (147.75.109.163:46576). May 17 01:51:12.785654 systemd-logind[1800]: Removed session 15. May 17 01:51:12.817957 sshd[8595]: Accepted publickey for core from 147.75.109.163 port 46576 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:12.818651 sshd[8595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:12.821204 systemd-logind[1800]: New session 16 of user core. May 17 01:51:12.842522 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 01:51:12.988028 sshd[8595]: pam_unix(sshd:session): session closed for user core May 17 01:51:12.989686 systemd[1]: sshd@22-145.40.90.165:22-147.75.109.163:46576.service: Deactivated successfully. May 17 01:51:12.990618 systemd[1]: session-16.scope: Deactivated successfully. May 17 01:51:12.991226 systemd-logind[1800]: Session 16 logged out. Waiting for processes to exit. May 17 01:51:12.991834 systemd-logind[1800]: Removed session 16. 
May 17 01:51:15.698953 kubelet[3068]: E0517 01:51:15.698784 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:51:16.697547 containerd[1819]: time="2025-05-17T01:51:16.697465227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:51:17.000124 containerd[1819]: time="2025-05-17T01:51:16.999858245Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 01:51:17.000885 containerd[1819]: time="2025-05-17T01:51:17.000858652Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 01:51:17.000961 containerd[1819]: time="2025-05-17T01:51:17.000938721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 01:51:17.001073 kubelet[3068]: E0517 01:51:17.001045 3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:51:17.001259 kubelet[3068]: E0517 01:51:17.001082 3068 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:51:17.001259 
kubelet[3068]: E0517 01:51:17.001166 3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z97mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-zwf9c_calico-system(4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 01:51:17.002509 kubelet[3068]: E0517 01:51:17.002481 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to 
authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:51:18.000310 systemd[1]: Started sshd@23-145.40.90.165:22-147.75.109.163:46588.service - OpenSSH per-connection server daemon (147.75.109.163:46588). May 17 01:51:18.029755 sshd[8670]: Accepted publickey for core from 147.75.109.163 port 46588 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:18.030421 sshd[8670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:18.032986 systemd-logind[1800]: New session 17 of user core. May 17 01:51:18.050535 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 01:51:18.177483 sshd[8670]: pam_unix(sshd:session): session closed for user core May 17 01:51:18.179188 systemd[1]: sshd@23-145.40.90.165:22-147.75.109.163:46588.service: Deactivated successfully. May 17 01:51:18.180236 systemd[1]: session-17.scope: Deactivated successfully. May 17 01:51:18.181077 systemd-logind[1800]: Session 17 logged out. Waiting for processes to exit. May 17 01:51:18.181847 systemd-logind[1800]: Removed session 17. May 17 01:51:23.202520 systemd[1]: Started sshd@24-145.40.90.165:22-147.75.109.163:46916.service - OpenSSH per-connection server daemon (147.75.109.163:46916). May 17 01:51:23.232977 sshd[8707]: Accepted publickey for core from 147.75.109.163 port 46916 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:23.233677 sshd[8707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:23.236334 systemd-logind[1800]: New session 18 of user core. May 17 01:51:23.249519 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 01:51:23.337330 sshd[8707]: pam_unix(sshd:session): session closed for user core May 17 01:51:23.338913 systemd[1]: sshd@24-145.40.90.165:22-147.75.109.163:46916.service: Deactivated successfully. May 17 01:51:23.339881 systemd[1]: session-18.scope: Deactivated successfully. May 17 01:51:23.340679 systemd-logind[1800]: Session 18 logged out. Waiting for processes to exit. May 17 01:51:23.341195 systemd-logind[1800]: Removed session 18. 
May 17 01:51:27.698358 kubelet[3068]: E0517 01:51:27.698205 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:51:28.354177 systemd[1]: Started sshd@25-145.40.90.165:22-147.75.109.163:54762.service - OpenSSH per-connection server daemon (147.75.109.163:54762). May 17 01:51:28.386707 sshd[8734]: Accepted publickey for core from 147.75.109.163 port 54762 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:28.387441 sshd[8734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:28.390153 systemd-logind[1800]: New session 19 of user core. May 17 01:51:28.411490 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 01:51:28.501081 sshd[8734]: pam_unix(sshd:session): session closed for user core May 17 01:51:28.502807 systemd[1]: sshd@25-145.40.90.165:22-147.75.109.163:54762.service: Deactivated successfully. May 17 01:51:28.503818 systemd[1]: session-19.scope: Deactivated successfully. May 17 01:51:28.504625 systemd-logind[1800]: Session 19 logged out. Waiting for processes to exit. May 17 01:51:28.505243 systemd-logind[1800]: Removed session 19. May 17 01:51:29.697250 kubelet[3068]: E0517 01:51:29.697144 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:51:33.535989 systemd[1]: Started sshd@26-145.40.90.165:22-147.75.109.163:54768.service - OpenSSH per-connection server daemon (147.75.109.163:54768). May 17 01:51:33.613795 sshd[8781]: Accepted publickey for core from 147.75.109.163 port 54768 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:33.614835 sshd[8781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:33.618369 systemd-logind[1800]: New session 20 of user core. 
May 17 01:51:33.631523 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 01:51:33.720708 sshd[8781]: pam_unix(sshd:session): session closed for user core May 17 01:51:33.744490 systemd[1]: sshd@26-145.40.90.165:22-147.75.109.163:54768.service: Deactivated successfully. May 17 01:51:33.745574 systemd[1]: session-20.scope: Deactivated successfully. May 17 01:51:33.746527 systemd-logind[1800]: Session 20 logged out. Waiting for processes to exit. May 17 01:51:33.747378 systemd[1]: Started sshd@27-145.40.90.165:22-147.75.109.163:54772.service - OpenSSH per-connection server daemon (147.75.109.163:54772). May 17 01:51:33.748041 systemd-logind[1800]: Removed session 20. May 17 01:51:33.793151 sshd[8807]: Accepted publickey for core from 147.75.109.163 port 54772 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:33.794355 sshd[8807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:33.798304 systemd-logind[1800]: New session 21 of user core. May 17 01:51:33.807411 systemd[1]: Started session-21.scope - Session 21 of User core. May 17 01:51:33.950911 sshd[8807]: pam_unix(sshd:session): session closed for user core May 17 01:51:33.965122 systemd[1]: sshd@27-145.40.90.165:22-147.75.109.163:54772.service: Deactivated successfully. May 17 01:51:33.965965 systemd[1]: session-21.scope: Deactivated successfully. May 17 01:51:33.966679 systemd-logind[1800]: Session 21 logged out. Waiting for processes to exit. May 17 01:51:33.967374 systemd[1]: Started sshd@28-145.40.90.165:22-147.75.109.163:54774.service - OpenSSH per-connection server daemon (147.75.109.163:54774). May 17 01:51:33.967906 systemd-logind[1800]: Removed session 21. May 17 01:51:34.008425 sshd[8831]: Accepted publickey for core from 147.75.109.163 port 54774 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:34.009230 sshd[8831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:34.012255 systemd-logind[1800]: New session 22 of user core. May 17 01:51:34.032568 systemd[1]: Started session-22.scope - Session 22 of User core. May 17 01:51:34.898509 sshd[8831]: pam_unix(sshd:session): session closed for user core May 17 01:51:34.910302 systemd[1]: sshd@28-145.40.90.165:22-147.75.109.163:54774.service: Deactivated successfully. May 17 01:51:34.911281 systemd[1]: session-22.scope: Deactivated successfully. May 17 01:51:34.912136 systemd-logind[1800]: Session 22 logged out. Waiting for processes to exit. May 17 01:51:34.913099 systemd[1]: Started sshd@29-145.40.90.165:22-147.75.109.163:54782.service - OpenSSH per-connection server daemon (147.75.109.163:54782). May 17 01:51:34.913829 systemd-logind[1800]: Removed session 22. May 17 01:51:34.956427 sshd[8860]: Accepted publickey for core from 147.75.109.163 port 54782 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:34.957421 sshd[8860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:34.960880 systemd-logind[1800]: New session 23 of user core. May 17 01:51:34.978524 systemd[1]: Started session-23.scope - Session 23 of User core. May 17 01:51:35.157211 sshd[8860]: pam_unix(sshd:session): session closed for user core May 17 01:51:35.167019 systemd[1]: sshd@29-145.40.90.165:22-147.75.109.163:54782.service: Deactivated successfully. May 17 01:51:35.167851 systemd[1]: session-23.scope: Deactivated successfully. May 17 01:51:35.168546 systemd-logind[1800]: Session 23 logged out. 
Waiting for processes to exit. May 17 01:51:35.169227 systemd[1]: Started sshd@30-145.40.90.165:22-147.75.109.163:54798.service - OpenSSH per-connection server daemon (147.75.109.163:54798). May 17 01:51:35.169763 systemd-logind[1800]: Removed session 23. May 17 01:51:35.208748 sshd[8888]: Accepted publickey for core from 147.75.109.163 port 54798 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:35.209609 sshd[8888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:35.212684 systemd-logind[1800]: New session 24 of user core. May 17 01:51:35.222396 systemd[1]: Started session-24.scope - Session 24 of User core. May 17 01:51:35.358292 sshd[8888]: pam_unix(sshd:session): session closed for user core May 17 01:51:35.360513 systemd[1]: sshd@30-145.40.90.165:22-147.75.109.163:54798.service: Deactivated successfully. May 17 01:51:35.361847 systemd[1]: session-24.scope: Deactivated successfully. May 17 01:51:35.362888 systemd-logind[1800]: Session 24 logged out. Waiting for processes to exit. May 17 01:51:35.363827 systemd-logind[1800]: Removed session 24. May 17 01:51:40.386995 systemd[1]: Started sshd@31-145.40.90.165:22-147.75.109.163:38202.service - OpenSSH per-connection server daemon (147.75.109.163:38202). May 17 01:51:40.469941 sshd[8918]: Accepted publickey for core from 147.75.109.163 port 38202 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:40.470979 sshd[8918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:40.474480 systemd-logind[1800]: New session 25 of user core. May 17 01:51:40.496518 systemd[1]: Started session-25.scope - Session 25 of User core. May 17 01:51:40.630617 sshd[8918]: pam_unix(sshd:session): session closed for user core May 17 01:51:40.636493 systemd[1]: sshd@31-145.40.90.165:22-147.75.109.163:38202.service: Deactivated successfully. May 17 01:51:40.639448 systemd[1]: session-25.scope: Deactivated successfully. May 17 01:51:40.641127 systemd-logind[1800]: Session 25 logged out. Waiting for processes to exit. May 17 01:51:40.643139 systemd-logind[1800]: Removed session 25. 
May 17 01:51:40.698694 kubelet[3068]: E0517 01:51:40.698567 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-757f968694-hv974" podUID="b211c981-6173-4ca8-aa53-cf31a5319b90" May 17 01:51:43.697576 kubelet[3068]: E0517 01:51:43.697416 3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-zwf9c" podUID="4e9e9b39-38e8-4c49-8ba5-42ffe70ae6b8" May 17 01:51:45.661823 systemd[1]: Started sshd@32-145.40.90.165:22-147.75.109.163:38214.service - OpenSSH per-connection server daemon (147.75.109.163:38214). May 17 01:51:45.743563 sshd[8946]: Accepted publickey for core from 147.75.109.163 port 38214 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:45.746246 sshd[8946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:45.754854 systemd-logind[1800]: New session 26 of user core. May 17 01:51:45.776750 systemd[1]: Started session-26.scope - Session 26 of User core. May 17 01:51:45.869598 sshd[8946]: pam_unix(sshd:session): session closed for user core May 17 01:51:45.871227 systemd[1]: sshd@32-145.40.90.165:22-147.75.109.163:38214.service: Deactivated successfully. May 17 01:51:45.872183 systemd[1]: session-26.scope: Deactivated successfully. May 17 01:51:45.872959 systemd-logind[1800]: Session 26 logged out. Waiting for processes to exit. May 17 01:51:45.873705 systemd-logind[1800]: Removed session 26. May 17 01:51:50.889618 systemd[1]: Started sshd@33-145.40.90.165:22-147.75.109.163:50164.service - OpenSSH per-connection server daemon (147.75.109.163:50164). May 17 01:51:50.974173 sshd[9005]: Accepted publickey for core from 147.75.109.163 port 50164 ssh2: RSA SHA256:N15S3LouWRGcsZlxJhJA2950ZTC48/MIHHKc+9L5svE May 17 01:51:50.975252 sshd[9005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 01:51:50.979092 systemd-logind[1800]: New session 27 of user core. 
May 17 01:51:50.992535 systemd[1]: Started session-27.scope - Session 27 of User core. May 17 01:51:51.076726 sshd[9005]: pam_unix(sshd:session): session closed for user core May 17 01:51:51.078702 systemd[1]: sshd@33-145.40.90.165:22-147.75.109.163:50164.service: Deactivated successfully. May 17 01:51:51.079659 systemd[1]: session-27.scope: Deactivated successfully. May 17 01:51:51.080059 systemd-logind[1800]: Session 27 logged out. Waiting for processes to exit. May 17 01:51:51.080636 systemd-logind[1800]: Removed session 27.