Nov 1 01:56:04.565479 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025
Nov 1 01:56:04.565492 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 01:56:04.565499 kernel: BIOS-provided physical RAM map:
Nov 1 01:56:04.565503 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Nov 1 01:56:04.565507 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Nov 1 01:56:04.565511 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Nov 1 01:56:04.565515 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Nov 1 01:56:04.565519 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Nov 1 01:56:04.565523 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000825bdfff] usable
Nov 1 01:56:04.565527 kernel: BIOS-e820: [mem 0x00000000825be000-0x00000000825befff] ACPI NVS
Nov 1 01:56:04.565532 kernel: BIOS-e820: [mem 0x00000000825bf000-0x00000000825bffff] reserved
Nov 1 01:56:04.565535 kernel: BIOS-e820: [mem 0x00000000825c0000-0x000000008afcdfff] usable
Nov 1 01:56:04.565539 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Nov 1 01:56:04.565543 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Nov 1 01:56:04.565548 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Nov 1 01:56:04.565553 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Nov 1 01:56:04.565558 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Nov 1 01:56:04.565562 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Nov 1 01:56:04.565566 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 1 01:56:04.565570 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Nov 1 01:56:04.565574 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Nov 1 01:56:04.565579 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 1 01:56:04.565583 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Nov 1 01:56:04.565587 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Nov 1 01:56:04.565591 kernel: NX (Execute Disable) protection: active
Nov 1 01:56:04.565595 kernel: SMBIOS 3.2.1 present.
Nov 1 01:56:04.565600 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024
Nov 1 01:56:04.565605 kernel: tsc: Detected 3400.000 MHz processor
Nov 1 01:56:04.565609 kernel: tsc: Detected 3399.906 MHz TSC
Nov 1 01:56:04.565613 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 01:56:04.565618 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 01:56:04.565623 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Nov 1 01:56:04.565627 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 01:56:04.565631 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Nov 1 01:56:04.565636 kernel: Using GB pages for direct mapping
Nov 1 01:56:04.565640 kernel: ACPI: Early table checksum verification disabled
Nov 1 01:56:04.565645 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Nov 1 01:56:04.565649 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Nov 1 01:56:04.565654 kernel: ACPI: FACP 0x000000008C58B670 000114 (v06 01072009 AMI 00010013)
Nov 1 01:56:04.565658 kernel: ACPI: DSDT 0x000000008C54F268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Nov 1 01:56:04.565664 kernel: ACPI: FACS 0x000000008C66DF80 000040
Nov 1 01:56:04.565669 kernel: ACPI: APIC 0x000000008C58B788 00012C (v04 01072009 AMI 00010013)
Nov 1 01:56:04.565675 kernel: ACPI: FPDT 0x000000008C58B8B8 000044 (v01 01072009 AMI 00010013)
Nov 1 01:56:04.565679 kernel: ACPI: FIDT 0x000000008C58B900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Nov 1 01:56:04.565684 kernel: ACPI: MCFG 0x000000008C58B9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Nov 1 01:56:04.565689 kernel: ACPI: SPMI 0x000000008C58B9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Nov 1 01:56:04.565694 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Nov 1 01:56:04.565698 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Nov 1 01:56:04.565703 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Nov 1 01:56:04.565708 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:56:04.565713 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Nov 1 01:56:04.565718 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Nov 1 01:56:04.565722 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:56:04.565727 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:56:04.565732 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Nov 1 01:56:04.565736 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Nov 1 01:56:04.565741 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:56:04.565746 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:56:04.565751 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Nov 1 01:56:04.565756 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Nov 1 01:56:04.565760 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Nov 1 01:56:04.565765 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Nov 1 01:56:04.565770 kernel: ACPI: SSDT 0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Nov 1 01:56:04.565775 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013)
Nov 1 01:56:04.565779 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Nov 1 01:56:04.565784 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Nov 1 01:56:04.565789 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Nov 1 01:56:04.565794 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Nov 1 01:56:04.565799 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Nov 1 01:56:04.565803 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783]
Nov 1 01:56:04.565808 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b]
Nov 1 01:56:04.565813 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Nov 1 01:56:04.565817 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3]
Nov 1 01:56:04.565822 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb]
Nov 1 01:56:04.565826 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b]
Nov 1 01:56:04.565832 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db]
Nov 1 01:56:04.565836 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20]
Nov 1 01:56:04.565841 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543]
Nov 1 01:56:04.565846 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d]
Nov 1 01:56:04.565850 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a]
Nov 1 01:56:04.565855 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77]
Nov 1 01:56:04.565860 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25]
Nov 1 01:56:04.565864 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b]
Nov 1 01:56:04.565869 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361]
Nov 1 01:56:04.565874 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb]
Nov 1 01:56:04.565879 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd]
Nov 1 01:56:04.565884 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1]
Nov 1 01:56:04.565888 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb]
Nov 1 01:56:04.565893 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598100-0x8c598153]
Nov 1 01:56:04.565898 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe]
Nov 1 01:56:04.565902 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f]
Nov 1 01:56:04.565907 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73]
Nov 1 01:56:04.565911 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab]
Nov 1 01:56:04.565917 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e]
Nov 1 01:56:04.565921 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67]
Nov 1 01:56:04.565926 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97]
Nov 1 01:56:04.565931 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7]
Nov 1 01:56:04.565935 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7]
Nov 1 01:56:04.565940 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59aff8-0x8c59b273]
Nov 1 01:56:04.565945 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9]
Nov 1 01:56:04.565949 kernel: No NUMA configuration found
Nov 1 01:56:04.565954 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Nov 1 01:56:04.565959 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Nov 1 01:56:04.565964 kernel: Zone ranges:
Nov 1 01:56:04.565969 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 01:56:04.565974 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 1 01:56:04.565978 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Nov 1 01:56:04.565983 kernel: Movable zone start for each node
Nov 1 01:56:04.565988 kernel: Early memory node ranges
Nov 1 01:56:04.565992 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Nov 1 01:56:04.565997 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Nov 1 01:56:04.566002 kernel: node 0: [mem 0x0000000040400000-0x00000000825bdfff]
Nov 1 01:56:04.566007 kernel: node 0: [mem 0x00000000825c0000-0x000000008afcdfff]
Nov 1 01:56:04.566012 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Nov 1 01:56:04.566016 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Nov 1 01:56:04.566021 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Nov 1 01:56:04.566025 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Nov 1 01:56:04.566030 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 01:56:04.566038 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Nov 1 01:56:04.566044 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Nov 1 01:56:04.566049 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Nov 1 01:56:04.566054 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Nov 1 01:56:04.566060 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Nov 1 01:56:04.566065 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Nov 1 01:56:04.566070 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Nov 1 01:56:04.566075 kernel: ACPI: PM-Timer IO Port: 0x1808
Nov 1 01:56:04.566080 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 1 01:56:04.566085 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 1 01:56:04.566090 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 1 01:56:04.566095 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 1 01:56:04.566100 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 1 01:56:04.566105 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 1 01:56:04.566110 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 1 01:56:04.566115 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 1 01:56:04.566120 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 1 01:56:04.566125 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 1 01:56:04.566130 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 1 01:56:04.566135 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 1 01:56:04.566141 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 1 01:56:04.566146 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 1 01:56:04.566150 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 1 01:56:04.566155 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 1 01:56:04.566160 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Nov 1 01:56:04.566165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 01:56:04.566170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 01:56:04.566175 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 01:56:04.566180 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 01:56:04.566186 kernel: TSC deadline timer available
Nov 1 01:56:04.566191 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Nov 1 01:56:04.566196 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Nov 1 01:56:04.566201 kernel: Booting paravirtualized kernel on bare hardware
Nov 1 01:56:04.566206 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 01:56:04.566212 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Nov 1 01:56:04.566217 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Nov 1 01:56:04.566222 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Nov 1 01:56:04.566226 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 1 01:56:04.566232 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Nov 1 01:56:04.566237 kernel: Policy zone: Normal
Nov 1 01:56:04.566243 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 01:56:04.566248 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 01:56:04.566253 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Nov 1 01:56:04.566258 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Nov 1 01:56:04.566263 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 01:56:04.566269 kernel: Memory: 32722608K/33452984K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 730116K reserved, 0K cma-reserved)
Nov 1 01:56:04.566274 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 1 01:56:04.566279 kernel: ftrace: allocating 34614 entries in 136 pages
Nov 1 01:56:04.566284 kernel: ftrace: allocated 136 pages with 2 groups
Nov 1 01:56:04.566289 kernel: rcu: Hierarchical RCU implementation.
Nov 1 01:56:04.566294 kernel: rcu: RCU event tracing is enabled.
Nov 1 01:56:04.566300 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 1 01:56:04.566305 kernel: Rude variant of Tasks RCU enabled.
Nov 1 01:56:04.566310 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 01:56:04.566316 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 01:56:04.566321 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 1 01:56:04.566329 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Nov 1 01:56:04.566334 kernel: random: crng init done
Nov 1 01:56:04.566339 kernel: Console: colour dummy device 80x25
Nov 1 01:56:04.566363 kernel: printk: console [tty0] enabled
Nov 1 01:56:04.566368 kernel: printk: console [ttyS1] enabled
Nov 1 01:56:04.566387 kernel: ACPI: Core revision 20210730
Nov 1 01:56:04.566392 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Nov 1 01:56:04.566397 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 01:56:04.566403 kernel: DMAR: Host address width 39
Nov 1 01:56:04.566408 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Nov 1 01:56:04.566413 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Nov 1 01:56:04.566418 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Nov 1 01:56:04.566423 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Nov 1 01:56:04.566428 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Nov 1 01:56:04.566433 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Nov 1 01:56:04.566438 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Nov 1 01:56:04.566443 kernel: x2apic enabled
Nov 1 01:56:04.566448 kernel: Switched APIC routing to cluster x2apic.
Nov 1 01:56:04.566453 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Nov 1 01:56:04.566459 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Nov 1 01:56:04.566464 kernel: CPU0: Thermal monitoring enabled (TM1)
Nov 1 01:56:04.566469 kernel: process: using mwait in idle threads
Nov 1 01:56:04.566474 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 1 01:56:04.566478 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 1 01:56:04.566483 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 01:56:04.566488 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Nov 1 01:56:04.566494 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 1 01:56:04.566499 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 1 01:56:04.566504 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 1 01:56:04.566509 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 1 01:56:04.566514 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 1 01:56:04.566519 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 01:56:04.566524 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 1 01:56:04.566529 kernel: TAA: Mitigation: TSX disabled
Nov 1 01:56:04.566534 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Nov 1 01:56:04.566539 kernel: SRBDS: Mitigation: Microcode
Nov 1 01:56:04.566543 kernel: GDS: Mitigation: Microcode
Nov 1 01:56:04.566549 kernel: active return thunk: its_return_thunk
Nov 1 01:56:04.566554 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 01:56:04.566559 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 01:56:04.566564 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 01:56:04.566569 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 01:56:04.566574 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 1 01:56:04.566579 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 1 01:56:04.566584 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 01:56:04.566589 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 1 01:56:04.566594 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 1 01:56:04.566598 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Nov 1 01:56:04.566604 kernel: Freeing SMP alternatives memory: 32K
Nov 1 01:56:04.566609 kernel: pid_max: default: 32768 minimum: 301
Nov 1 01:56:04.566614 kernel: LSM: Security Framework initializing
Nov 1 01:56:04.566619 kernel: SELinux: Initializing.
Nov 1 01:56:04.566624 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 01:56:04.566629 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 01:56:04.566634 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Nov 1 01:56:04.566639 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 1 01:56:04.566644 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Nov 1 01:56:04.566649 kernel: ... version:                4
Nov 1 01:56:04.566654 kernel: ... bit width:              48
Nov 1 01:56:04.566660 kernel: ... generic registers:      4
Nov 1 01:56:04.566665 kernel: ... value mask:             0000ffffffffffff
Nov 1 01:56:04.566670 kernel: ... max period:             00007fffffffffff
Nov 1 01:56:04.566675 kernel: ... fixed-purpose events:   3
Nov 1 01:56:04.566680 kernel: ... event mask:             000000070000000f
Nov 1 01:56:04.566685 kernel: signal: max sigframe size: 2032
Nov 1 01:56:04.566690 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 01:56:04.566695 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Nov 1 01:56:04.566700 kernel: smp: Bringing up secondary CPUs ...
Nov 1 01:56:04.566705 kernel: x86: Booting SMP configuration:
Nov 1 01:56:04.566710 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Nov 1 01:56:04.566716 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 1 01:56:04.566721 kernel: #9 #10 #11 #12 #13 #14 #15
Nov 1 01:56:04.566726 kernel: smp: Brought up 1 node, 16 CPUs
Nov 1 01:56:04.566731 kernel: smpboot: Max logical packages: 1
Nov 1 01:56:04.566736 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Nov 1 01:56:04.566741 kernel: devtmpfs: initialized
Nov 1 01:56:04.566746 kernel: x86/mm: Memory block size: 128MB
Nov 1 01:56:04.566752 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x825be000-0x825befff] (4096 bytes)
Nov 1 01:56:04.566757 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
Nov 1 01:56:04.566762 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 01:56:04.566767 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 1 01:56:04.566772 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 01:56:04.566777 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 01:56:04.566782 kernel: audit: initializing netlink subsys (disabled)
Nov 1 01:56:04.566787 kernel: audit: type=2000 audit(1761962159.041:1): state=initialized audit_enabled=0 res=1
Nov 1 01:56:04.566792 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 01:56:04.566798 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 01:56:04.566803 kernel: cpuidle: using governor menu
Nov 1 01:56:04.566807 kernel: ACPI: bus type PCI registered
Nov 1 01:56:04.566813 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 01:56:04.566818 kernel: dca service started, version 1.12.1
Nov 1 01:56:04.566822 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Nov 1 01:56:04.566827 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Nov 1 01:56:04.566833 kernel: PCI: Using configuration type 1 for base access
Nov 1 01:56:04.566838 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Nov 1 01:56:04.566843 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 01:56:04.566848 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 01:56:04.566853 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 01:56:04.566858 kernel: ACPI: Added _OSI(Module Device)
Nov 1 01:56:04.566863 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 01:56:04.566868 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 01:56:04.566873 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 01:56:04.566878 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 01:56:04.566883 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 01:56:04.566889 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Nov 1 01:56:04.566894 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 01:56:04.566899 kernel: ACPI: SSDT 0xFFFF90350021BF00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Nov 1 01:56:04.566904 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Nov 1 01:56:04.566909 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 01:56:04.566914 kernel: ACPI: SSDT 0xFFFF903501AE0800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Nov 1 01:56:04.566919 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 01:56:04.566924 kernel: ACPI: SSDT 0xFFFF903501A58000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Nov 1 01:56:04.566929 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 01:56:04.566934 kernel: ACPI: SSDT 0xFFFF903501B4B000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Nov 1 01:56:04.566939 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 01:56:04.566944 kernel: ACPI: SSDT 0xFFFF90350014F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Nov 1 01:56:04.566949 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 01:56:04.566954 kernel: ACPI: SSDT 0xFFFF903501AE4800 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Nov 1 01:56:04.566959 kernel: ACPI: Interpreter enabled
Nov 1 01:56:04.566964 kernel: ACPI: PM: (supports S0 S5)
Nov 1 01:56:04.566969 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 01:56:04.566974 kernel: HEST: Enabling Firmware First mode for corrected errors.
Nov 1 01:56:04.566979 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Nov 1 01:56:04.566985 kernel: HEST: Table parsing has been initialized.
Nov 1 01:56:04.566990 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Nov 1 01:56:04.566995 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 01:56:04.567000 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Nov 1 01:56:04.567005 kernel: ACPI: PM: Power Resource [USBC]
Nov 1 01:56:04.567010 kernel: ACPI: PM: Power Resource [V0PR]
Nov 1 01:56:04.567015 kernel: ACPI: PM: Power Resource [V1PR]
Nov 1 01:56:04.567020 kernel: ACPI: PM: Power Resource [V2PR]
Nov 1 01:56:04.567025 kernel: ACPI: PM: Power Resource [WRST]
Nov 1 01:56:04.567030 kernel: ACPI: PM: Power Resource [FN00]
Nov 1 01:56:04.567035 kernel: ACPI: PM: Power Resource [FN01]
Nov 1 01:56:04.567040 kernel: ACPI: PM: Power Resource [FN02]
Nov 1 01:56:04.567045 kernel: ACPI: PM: Power Resource [FN03]
Nov 1 01:56:04.567050 kernel: ACPI: PM: Power Resource [FN04]
Nov 1 01:56:04.567055 kernel: ACPI: PM: Power Resource [PIN]
Nov 1 01:56:04.567060 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Nov 1 01:56:04.567127 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 01:56:04.567177 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Nov 1 01:56:04.567220 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Nov 1 01:56:04.567227 kernel: PCI host bridge to bus 0000:00
Nov 1 01:56:04.567273 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 01:56:04.567313 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 01:56:04.567376 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 01:56:04.567416 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Nov 1 01:56:04.567456 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Nov 1 01:56:04.567496 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Nov 1 01:56:04.567551 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Nov 1 01:56:04.567605 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Nov 1 01:56:04.567652 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Nov 1 01:56:04.567702 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Nov 1 01:56:04.567750 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Nov 1 01:56:04.567798 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Nov 1 01:56:04.567844 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Nov 1 01:56:04.567892 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Nov 1 01:56:04.567937 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Nov 1 01:56:04.567983 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Nov 1 01:56:04.568035 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Nov 1 01:56:04.568079 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Nov 1 01:56:04.568124 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Nov 1 01:56:04.568172 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Nov 1 01:56:04.568217 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 1 01:56:04.568269 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Nov 1 01:56:04.568315 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 1 01:56:04.568369 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Nov 1 01:56:04.568413 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Nov 1 01:56:04.568458 kernel: pci 0000:00:16.0: PME# supported from D3hot
Nov 1 01:56:04.568505 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Nov 1 01:56:04.568550 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Nov 1 01:56:04.568594 kernel: pci 0000:00:16.1: PME# supported from D3hot
Nov 1 01:56:04.568643 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Nov 1 01:56:04.568688 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Nov 1 01:56:04.568731 kernel: pci 0000:00:16.4: PME# supported from D3hot
Nov 1 01:56:04.568779 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Nov 1 01:56:04.568823 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Nov 1 01:56:04.568868 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Nov 1 01:56:04.568919 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Nov 1 01:56:04.568965 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Nov 1 01:56:04.569010 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Nov 1 01:56:04.569054 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Nov 1 01:56:04.569098 kernel: pci 0000:00:17.0: PME# supported from D3hot
Nov 1 01:56:04.569146 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Nov 1 01:56:04.569191 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Nov 1 01:56:04.569240 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Nov 1 01:56:04.569287 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Nov 1 01:56:04.569345 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Nov 1 01:56:04.569390 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Nov 1 01:56:04.569440 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Nov 1 01:56:04.569485 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Nov 1 01:56:04.569536 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Nov 1 01:56:04.569581 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Nov 1 01:56:04.569630 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Nov 1 01:56:04.569676 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 1 01:56:04.569725 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Nov 1 01:56:04.569776 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Nov 1 01:56:04.569820 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Nov 1 01:56:04.569865 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Nov 1 01:56:04.569914 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Nov 1 01:56:04.569960 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Nov 1 01:56:04.570010 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Nov 1 01:56:04.570059 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Nov 1 01:56:04.570106 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Nov 1 01:56:04.570151 kernel: pci 0000:01:00.0: PME# supported from D3cold
Nov 1 01:56:04.570198 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Nov 1 01:56:04.570244 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Nov 1 01:56:04.570296 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Nov 1 01:56:04.570346 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Nov 1 01:56:04.570393 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Nov 1 01:56:04.570439 kernel: pci 0000:01:00.1: PME# supported from D3cold
Nov 1 01:56:04.570485 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Nov 1 01:56:04.570531 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Nov 1 01:56:04.570576 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 1 01:56:04.570621 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Nov 1 01:56:04.570665 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Nov 1 01:56:04.570713 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Nov 1 01:56:04.570763 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Nov 1 01:56:04.570811 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Nov 1 01:56:04.570857 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Nov 1 01:56:04.570903 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Nov 1 01:56:04.570981 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Nov 1 01:56:04.571045 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Nov 1 01:56:04.571093 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Nov 1 01:56:04.571138 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Nov 1 01:56:04.571183 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Nov 1 01:56:04.571233 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Nov 1 01:56:04.571280 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Nov 1 01:56:04.571328 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Nov 1 01:56:04.571395 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Nov 1 01:56:04.571443 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Nov 1 01:56:04.571489 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Nov 1 01:56:04.571534 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Nov 1 01:56:04.571578 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Nov 1 01:56:04.571622 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Nov 1 01:56:04.571666 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Nov 1 01:56:04.571716 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Nov 1 01:56:04.571763 kernel: pci 0000:06:00.0: enabling Extended Tags
Nov 1 01:56:04.571810 kernel: pci 0000:06:00.0: supports D1 D2
Nov 1 01:56:04.571856
kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:56:04.571900 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:56:04.571945 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 01:56:04.571989 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:56:04.572039 kernel: pci_bus 0000:07: extended config space not accessible Nov 1 01:56:04.572092 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Nov 1 01:56:04.572142 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 1 01:56:04.572190 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 1 01:56:04.572238 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Nov 1 01:56:04.572286 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 01:56:04.572336 kernel: pci 0000:07:00.0: supports D1 D2 Nov 1 01:56:04.572430 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:56:04.572476 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:56:04.572522 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 01:56:04.572571 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:56:04.572578 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 1 01:56:04.572584 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 1 01:56:04.572590 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 1 01:56:04.572595 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 1 01:56:04.572600 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 1 01:56:04.572606 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 1 01:56:04.572611 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 1 01:56:04.572618 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 1 01:56:04.572623 kernel: iommu: Default domain type: Translated Nov 1 01:56:04.572629 kernel: iommu: DMA 
domain TLB invalidation policy: lazy mode Nov 1 01:56:04.572677 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Nov 1 01:56:04.572724 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 01:56:04.572772 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Nov 1 01:56:04.572779 kernel: vgaarb: loaded Nov 1 01:56:04.572785 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 01:56:04.572790 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 01:56:04.572797 kernel: PTP clock support registered Nov 1 01:56:04.572803 kernel: PCI: Using ACPI for IRQ routing Nov 1 01:56:04.572808 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 01:56:04.572813 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Nov 1 01:56:04.572819 kernel: e820: reserve RAM buffer [mem 0x825be000-0x83ffffff] Nov 1 01:56:04.572824 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Nov 1 01:56:04.572829 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Nov 1 01:56:04.572834 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 1 01:56:04.572840 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 1 01:56:04.572846 kernel: clocksource: Switched to clocksource tsc-early Nov 1 01:56:04.572851 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 01:56:04.572857 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 01:56:04.572863 kernel: pnp: PnP ACPI init Nov 1 01:56:04.572907 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 1 01:56:04.572952 kernel: pnp 00:02: [dma 0 disabled] Nov 1 01:56:04.572995 kernel: pnp 00:03: [dma 0 disabled] Nov 1 01:56:04.573041 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 1 01:56:04.573082 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 1 01:56:04.573124 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Nov 1 01:56:04.573168 kernel: system 00:06: [mem 
0xfed10000-0xfed17fff] has been reserved Nov 1 01:56:04.573208 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Nov 1 01:56:04.573248 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Nov 1 01:56:04.573290 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Nov 1 01:56:04.573352 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 1 01:56:04.573413 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 1 01:56:04.573451 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 1 01:56:04.573491 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 1 01:56:04.573533 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Nov 1 01:56:04.573574 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 1 01:56:04.573615 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 1 01:56:04.573655 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 1 01:56:04.573693 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 1 01:56:04.573732 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 1 01:56:04.573772 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Nov 1 01:56:04.573814 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Nov 1 01:56:04.573821 kernel: pnp: PnP ACPI: found 10 devices Nov 1 01:56:04.573828 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 01:56:04.573834 kernel: NET: Registered PF_INET protocol family Nov 1 01:56:04.573839 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:56:04.573845 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 01:56:04.573850 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 01:56:04.573856 kernel: TCP 
established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:56:04.573861 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Nov 1 01:56:04.573867 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 1 01:56:04.573872 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:56:04.573878 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:56:04.573884 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 01:56:04.573889 kernel: NET: Registered PF_XDP protocol family Nov 1 01:56:04.573934 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 1 01:56:04.573978 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 1 01:56:04.574023 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 1 01:56:04.574069 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:56:04.574115 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:56:04.574163 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:56:04.574209 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:56:04.574254 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:56:04.574298 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:56:04.574368 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:56:04.574413 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:56:04.574460 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:56:04.574505 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:56:04.574552 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:56:04.574597 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:56:04.574642 kernel: pci 
0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:56:04.574687 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:56:04.574731 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:56:04.574781 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:56:04.574828 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 01:56:04.574875 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:56:04.574919 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:56:04.574964 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 01:56:04.575008 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:56:04.575049 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 1 01:56:04.575089 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 01:56:04.575128 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 01:56:04.575169 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 01:56:04.575208 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 1 01:56:04.575247 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 1 01:56:04.575294 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Nov 1 01:56:04.575339 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:56:04.575387 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Nov 1 01:56:04.575430 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Nov 1 01:56:04.575477 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 1 01:56:04.575518 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Nov 1 01:56:04.575564 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Nov 1 01:56:04.575607 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:56:04.575651 kernel: pci_bus 0000:07: resource 0 [io 
0x3000-0x3fff] Nov 1 01:56:04.575695 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:56:04.575704 kernel: PCI: CLS 64 bytes, default 64 Nov 1 01:56:04.575710 kernel: DMAR: No ATSR found Nov 1 01:56:04.575716 kernel: DMAR: No SATC found Nov 1 01:56:04.575721 kernel: DMAR: dmar0: Using Queued invalidation Nov 1 01:56:04.575766 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 1 01:56:04.575814 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 1 01:56:04.575860 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 1 01:56:04.575906 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 1 01:56:04.575953 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 1 01:56:04.575997 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 1 01:56:04.576042 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 1 01:56:04.576085 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 1 01:56:04.576130 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 1 01:56:04.576175 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 1 01:56:04.576220 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 1 01:56:04.576263 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 1 01:56:04.576309 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 1 01:56:04.576358 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 1 01:56:04.576402 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 1 01:56:04.576448 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 1 01:56:04.576492 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Nov 1 01:56:04.576537 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 1 01:56:04.576581 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 1 01:56:04.576627 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 1 01:56:04.576671 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 1 01:56:04.576720 kernel: pci 0000:01:00.0: Adding to iommu group 1 Nov 1 01:56:04.576767 kernel: pci 0000:01:00.1: Adding to iommu group 1 Nov 1 
01:56:04.576813 kernel: pci 0000:03:00.0: Adding to iommu group 15 Nov 1 01:56:04.576860 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 1 01:56:04.576906 kernel: pci 0000:06:00.0: Adding to iommu group 17 Nov 1 01:56:04.576954 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 1 01:56:04.576962 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 1 01:56:04.576968 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 01:56:04.576975 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Nov 1 01:56:04.576980 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 1 01:56:04.576986 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 1 01:56:04.576991 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 1 01:56:04.576997 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 1 01:56:04.577044 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 1 01:56:04.577052 kernel: Initialise system trusted keyrings Nov 1 01:56:04.577058 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 1 01:56:04.577065 kernel: Key type asymmetric registered Nov 1 01:56:04.577070 kernel: Asymmetric key parser 'x509' registered Nov 1 01:56:04.577075 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 1 01:56:04.577081 kernel: io scheduler mq-deadline registered Nov 1 01:56:04.577087 kernel: io scheduler kyber registered Nov 1 01:56:04.577092 kernel: io scheduler bfq registered Nov 1 01:56:04.577137 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 1 01:56:04.577182 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Nov 1 01:56:04.577228 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Nov 1 01:56:04.577275 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Nov 1 01:56:04.577320 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Nov 1 
01:56:04.577369 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Nov 1 01:56:04.577419 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 1 01:56:04.577427 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 1 01:56:04.577433 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Nov 1 01:56:04.577438 kernel: pstore: Registered erst as persistent store backend Nov 1 01:56:04.577444 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 01:56:04.577451 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 01:56:04.577456 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 01:56:04.577462 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 1 01:56:04.577468 kernel: hpet_acpi_add: no address or irqs in _CRS Nov 1 01:56:04.577516 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Nov 1 01:56:04.577524 kernel: i8042: PNP: No PS/2 controller found. Nov 1 01:56:04.577583 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Nov 1 01:56:04.577626 kernel: rtc_cmos rtc_cmos: registered as rtc0 Nov 1 01:56:04.577668 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-01T01:56:03 UTC (1761962163) Nov 1 01:56:04.577709 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Nov 1 01:56:04.577716 kernel: intel_pstate: Intel P-state driver initializing Nov 1 01:56:04.577722 kernel: intel_pstate: Disabling energy efficiency optimization Nov 1 01:56:04.577727 kernel: intel_pstate: HWP enabled Nov 1 01:56:04.577733 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Nov 1 01:56:04.577738 kernel: vesafb: scrolling: redraw Nov 1 01:56:04.577744 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Nov 1 01:56:04.577751 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000f610468e, using 768k, total 768k Nov 1 01:56:04.577756 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 
01:56:04.577761 kernel: fb0: VESA VGA frame buffer device Nov 1 01:56:04.577767 kernel: NET: Registered PF_INET6 protocol family Nov 1 01:56:04.577772 kernel: Segment Routing with IPv6 Nov 1 01:56:04.577778 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 01:56:04.577783 kernel: NET: Registered PF_PACKET protocol family Nov 1 01:56:04.577789 kernel: Key type dns_resolver registered Nov 1 01:56:04.577794 kernel: microcode: sig=0x906ed, pf=0x2, revision=0x102 Nov 1 01:56:04.577800 kernel: microcode: Microcode Update Driver: v2.2. Nov 1 01:56:04.577805 kernel: IPI shorthand broadcast: enabled Nov 1 01:56:04.577811 kernel: sched_clock: Marking stable (1689562594, 1340014925)->(4483279175, -1453701656) Nov 1 01:56:04.577816 kernel: registered taskstats version 1 Nov 1 01:56:04.577821 kernel: Loading compiled-in X.509 certificates Nov 1 01:56:04.577827 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0' Nov 1 01:56:04.577832 kernel: Key type .fscrypt registered Nov 1 01:56:04.577837 kernel: Key type fscrypt-provisioning registered Nov 1 01:56:04.577843 kernel: pstore: Using crash dump compression: deflate Nov 1 01:56:04.577849 kernel: ima: Allocated hash algorithm: sha1 Nov 1 01:56:04.577854 kernel: ima: No architecture policies found Nov 1 01:56:04.577860 kernel: clk: Disabling unused clocks Nov 1 01:56:04.577865 kernel: Freeing unused kernel image (initmem) memory: 47496K Nov 1 01:56:04.577871 kernel: Write protecting the kernel read-only data: 28672k Nov 1 01:56:04.577876 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Nov 1 01:56:04.577882 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Nov 1 01:56:04.577887 kernel: Run /init as init process Nov 1 01:56:04.577892 kernel: with arguments: Nov 1 01:56:04.577898 kernel: /init Nov 1 01:56:04.577904 kernel: with environment: Nov 1 01:56:04.577909 kernel: HOME=/ Nov 1 01:56:04.577914 kernel: TERM=linux Nov 1 
01:56:04.577919 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 1 01:56:04.577926 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 01:56:04.577933 systemd[1]: Detected architecture x86-64. Nov 1 01:56:04.577939 systemd[1]: Running in initrd. Nov 1 01:56:04.577945 systemd[1]: No hostname configured, using default hostname. Nov 1 01:56:04.577951 systemd[1]: Hostname set to . Nov 1 01:56:04.577956 systemd[1]: Initializing machine ID from random generator. Nov 1 01:56:04.577962 systemd[1]: Queued start job for default target initrd.target. Nov 1 01:56:04.577968 systemd[1]: Started systemd-ask-password-console.path. Nov 1 01:56:04.577973 systemd[1]: Reached target cryptsetup.target. Nov 1 01:56:04.577979 systemd[1]: Reached target paths.target. Nov 1 01:56:04.577984 systemd[1]: Reached target slices.target. Nov 1 01:56:04.577991 systemd[1]: Reached target swap.target. Nov 1 01:56:04.577996 systemd[1]: Reached target timers.target. Nov 1 01:56:04.578001 systemd[1]: Listening on iscsid.socket. Nov 1 01:56:04.578007 systemd[1]: Listening on iscsiuio.socket. Nov 1 01:56:04.578013 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 01:56:04.578018 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 01:56:04.578024 systemd[1]: Listening on systemd-journald.socket. Nov 1 01:56:04.578030 systemd[1]: Listening on systemd-networkd.socket. Nov 1 01:56:04.578036 systemd[1]: Listening on systemd-udevd-control.socket. 
Nov 1 01:56:04.578041 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Nov 1 01:56:04.578047 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Nov 1 01:56:04.578052 kernel: clocksource: Switched to clocksource tsc Nov 1 01:56:04.578058 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 01:56:04.578063 systemd[1]: Reached target sockets.target. Nov 1 01:56:04.578069 systemd[1]: Starting kmod-static-nodes.service... Nov 1 01:56:04.578075 systemd[1]: Finished network-cleanup.service. Nov 1 01:56:04.578081 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 01:56:04.578087 systemd[1]: Starting systemd-journald.service... Nov 1 01:56:04.578092 systemd[1]: Starting systemd-modules-load.service... Nov 1 01:56:04.578099 systemd-journald[268]: Journal started Nov 1 01:56:04.578127 systemd-journald[268]: Runtime Journal (/run/log/journal/a4c44bf0eebd45daa57d9efb3c6f9c74) is 8.0M, max 640.1M, 632.1M free. Nov 1 01:56:04.580511 systemd-modules-load[269]: Inserted module 'overlay' Nov 1 01:56:04.586000 audit: BPF prog-id=6 op=LOAD Nov 1 01:56:04.604393 kernel: audit: type=1334 audit(1761962164.586:2): prog-id=6 op=LOAD Nov 1 01:56:04.604422 systemd[1]: Starting systemd-resolved.service... Nov 1 01:56:04.653385 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 01:56:04.653401 systemd[1]: Starting systemd-vconsole-setup.service... Nov 1 01:56:04.686369 kernel: Bridge firewalling registered Nov 1 01:56:04.686386 systemd[1]: Started systemd-journald.service. Nov 1 01:56:04.700295 systemd-modules-load[269]: Inserted module 'br_netfilter' Nov 1 01:56:04.748058 kernel: audit: type=1130 audit(1761962164.708:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:56:04.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:04.703097 systemd-resolved[271]: Positive Trust Anchors: Nov 1 01:56:04.812387 kernel: SCSI subsystem initialized Nov 1 01:56:04.812402 kernel: audit: type=1130 audit(1761962164.762:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:04.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:04.703102 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:56:04.927416 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 01:56:04.927429 kernel: audit: type=1130 audit(1761962164.832:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:04.927437 kernel: device-mapper: uevent: version 1.0.3 Nov 1 01:56:04.927444 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 01:56:04.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:56:04.703124 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 01:56:05.001586 kernel: audit: type=1130 audit(1761962164.935:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:04.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:04.704741 systemd-resolved[271]: Defaulting to hostname 'linux'. Nov 1 01:56:05.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:04.708559 systemd[1]: Started systemd-resolved.service. Nov 1 01:56:05.111408 kernel: audit: type=1130 audit(1761962165.010:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:05.111419 kernel: audit: type=1130 audit(1761962165.065:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:56:05.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:04.762515 systemd[1]: Finished kmod-static-nodes.service. Nov 1 01:56:04.832506 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 01:56:04.929516 systemd-modules-load[269]: Inserted module 'dm_multipath' Nov 1 01:56:04.935515 systemd[1]: Finished systemd-modules-load.service. Nov 1 01:56:05.010702 systemd[1]: Finished systemd-vconsole-setup.service. Nov 1 01:56:05.065621 systemd[1]: Reached target nss-lookup.target. Nov 1 01:56:05.119935 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 01:56:05.139861 systemd[1]: Starting systemd-sysctl.service... Nov 1 01:56:05.140161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 01:56:05.143099 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 01:56:05.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:05.143851 systemd[1]: Finished systemd-sysctl.service. Nov 1 01:56:05.254820 kernel: audit: type=1130 audit(1761962165.142:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:05.254833 kernel: audit: type=1130 audit(1761962165.206:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:05.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:56:05.206678 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 01:56:05.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:05.263952 systemd[1]: Starting dracut-cmdline.service... Nov 1 01:56:05.285446 dracut-cmdline[295]: dracut-dracut-053 Nov 1 01:56:05.285446 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Nov 1 01:56:05.285446 dracut-cmdline[295]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 01:56:05.355411 kernel: Loading iSCSI transport class v2.0-870. Nov 1 01:56:05.355424 kernel: iscsi: registered transport (tcp) Nov 1 01:56:05.413052 kernel: iscsi: registered transport (qla4xxx) Nov 1 01:56:05.413069 kernel: QLogic iSCSI HBA Driver Nov 1 01:56:05.429066 systemd[1]: Finished dracut-cmdline.service. Nov 1 01:56:05.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:05.438046 systemd[1]: Starting dracut-pre-udev.service... 
Nov 1 01:56:05.493393 kernel: raid6: avx2x4 gen() 48697 MB/s
Nov 1 01:56:05.528362 kernel: raid6: avx2x4 xor() 21748 MB/s
Nov 1 01:56:05.563363 kernel: raid6: avx2x2 gen() 53578 MB/s
Nov 1 01:56:05.598402 kernel: raid6: avx2x2 xor() 32078 MB/s
Nov 1 01:56:05.633364 kernel: raid6: avx2x1 gen() 45108 MB/s
Nov 1 01:56:05.668400 kernel: raid6: avx2x1 xor() 27823 MB/s
Nov 1 01:56:05.703368 kernel: raid6: sse2x4 gen() 21314 MB/s
Nov 1 01:56:05.737363 kernel: raid6: sse2x4 xor() 11988 MB/s
Nov 1 01:56:05.771401 kernel: raid6: sse2x2 gen() 21616 MB/s
Nov 1 01:56:05.805363 kernel: raid6: sse2x2 xor() 13412 MB/s
Nov 1 01:56:05.839363 kernel: raid6: sse2x1 gen() 18326 MB/s
Nov 1 01:56:05.891260 kernel: raid6: sse2x1 xor() 8929 MB/s
Nov 1 01:56:05.891275 kernel: raid6: using algorithm avx2x2 gen() 53578 MB/s
Nov 1 01:56:05.891283 kernel: raid6: .... xor() 32078 MB/s, rmw enabled
Nov 1 01:56:05.909452 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 01:56:05.955360 kernel: xor: automatically using best checksumming function avx
Nov 1 01:56:06.036382 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Nov 1 01:56:06.041814 systemd[1]: Finished dracut-pre-udev.service.
Nov 1 01:56:06.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:06.051000 audit: BPF prog-id=7 op=LOAD
Nov 1 01:56:06.051000 audit: BPF prog-id=8 op=LOAD
Nov 1 01:56:06.052271 systemd[1]: Starting systemd-udevd.service...
Nov 1 01:56:06.059913 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Nov 1 01:56:06.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:06.065595 systemd[1]: Started systemd-udevd.service.
Nov 1 01:56:06.106455 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Nov 1 01:56:06.082985 systemd[1]: Starting dracut-pre-trigger.service...
Nov 1 01:56:06.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:06.112473 systemd[1]: Finished dracut-pre-trigger.service.
Nov 1 01:56:06.124602 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 01:56:06.225340 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 01:56:06.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:06.254339 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 01:56:06.292040 kernel: ACPI: bus type USB registered
Nov 1 01:56:06.292137 kernel: usbcore: registered new interface driver usbfs
Nov 1 01:56:06.292145 kernel: usbcore: registered new interface driver hub
Nov 1 01:56:06.310290 kernel: usbcore: registered new device driver usb
Nov 1 01:56:06.352335 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 01:56:06.352371 kernel: libata version 3.00 loaded.
Nov 1 01:56:06.352381 kernel: AES CTR mode by8 optimization enabled
Nov 1 01:56:06.406564 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Nov 1 01:56:06.406598 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014
Nov 1 01:56:07.037946 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Nov 1 01:56:07.037963 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Nov 1 01:56:07.038031 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Nov 1 01:56:07.038124 kernel: igb 0000:03:00.0: added PHC on eth0
Nov 1 01:56:07.038381 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Nov 1 01:56:07.038459 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Nov 1 01:56:07.038520 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Nov 1 01:56:07.038578 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d3:7e
Nov 1 01:56:07.038640 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Nov 1 01:56:07.038699 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Nov 1 01:56:07.038761 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Nov 1 01:56:07.038819 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Nov 1 01:56:07.038879 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Nov 1 01:56:07.038937 kernel: hub 1-0:1.0: USB hub found
Nov 1 01:56:07.039008 kernel: hub 1-0:1.0: 16 ports detected
Nov 1 01:56:07.039072 kernel: igb 0000:04:00.0: added PHC on eth1
Nov 1 01:56:07.039136 kernel: hub 2-0:1.0: USB hub found
Nov 1 01:56:07.039204 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Nov 1 01:56:07.039265 kernel: hub 2-0:1.0: 10 ports detected
Nov 1 01:56:07.039332 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d3:7f
Nov 1 01:56:07.039445 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Nov 1 01:56:07.039508 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
Nov 1 01:56:07.039568 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Nov 1 01:56:07.039630 kernel: ahci 0000:00:17.0: version 3.0
Nov 1 01:56:07.162599 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Nov 1 01:56:07.162792 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Nov 1 01:56:07.162926 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Nov 1 01:56:07.162981 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Nov 1 01:56:07.163033 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Nov 1 01:56:07.163085 kernel: scsi host0: ahci
Nov 1 01:56:07.163144 kernel: scsi host1: ahci
Nov 1 01:56:07.163201 kernel: scsi host2: ahci
Nov 1 01:56:07.163255 kernel: scsi host3: ahci
Nov 1 01:56:07.163310 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Nov 1 01:56:07.163453 kernel: scsi host4: ahci
Nov 1 01:56:07.163515 kernel: scsi host5: ahci
Nov 1 01:56:07.163569 kernel: scsi host6: ahci
Nov 1 01:56:07.163624 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 155
Nov 1 01:56:07.163632 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 155
Nov 1 01:56:07.163638 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 155
Nov 1 01:56:07.163645 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
Nov 1 01:56:07.163710 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 155
Nov 1 01:56:07.163718 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014
Nov 1 01:56:07.740077 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 155
Nov 1 01:56:07.740096 kernel: hub 1-14:1.0: USB hub found
Nov 1 01:56:07.740197 kernel: hub 1-14:1.0: 4 ports detected
Nov 1 01:56:07.740259 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Nov 1 01:56:07.740313 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 155
Nov 1 01:56:07.740321 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 155
Nov 1 01:56:07.740331 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Nov 1 01:56:07.740477 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
Nov 1 01:56:07.740534 kernel: port_module: 9 callbacks suppressed
Nov 1 01:56:07.740542 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Nov 1 01:56:07.740595 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 1 01:56:07.740603 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Nov 1 01:56:07.740653 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 1 01:56:07.740660 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 1 01:56:07.740667 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Nov 1 01:56:07.740674 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Nov 1 01:56:07.740681 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 1 01:56:07.740687 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Nov 1 01:56:07.740693 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 1 01:56:07.740699 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Nov 1 01:56:07.740706 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Nov 1 01:56:07.740712 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Nov 1 01:56:07.740718 kernel: ata2.00: Features: NCQ-prio
Nov 1 01:56:07.740725 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Nov 1 01:56:07.740732 kernel: ata1.00: Features: NCQ-prio
Nov 1 01:56:07.740738 kernel: ata2.00: configured for UDMA/133
Nov 1 01:56:07.740745 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295
Nov 1 01:56:07.740796 kernel: ata1.00: configured for UDMA/133
Nov 1 01:56:07.745394 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Nov 1 01:56:07.763417 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Nov 1 01:56:07.799332 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
Nov 1 01:56:07.828076 kernel: usbcore: registered new interface driver usbhid
Nov 1 01:56:07.828124 kernel: usbhid: USB HID core driver
Nov 1 01:56:07.845387 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
Nov 1 01:56:07.845487 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Nov 1 01:56:07.876331 kernel: ata2.00: Enabling discard_zeroes_data
Nov 1 01:56:07.890805 kernel: ata1.00: Enabling discard_zeroes_data
Nov 1 01:56:07.905310 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Nov 1 01:56:08.207188 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Nov 1 01:56:08.329717 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Nov 1 01:56:08.329881 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Nov 1 01:56:08.330016 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Nov 1 01:56:08.330031 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Nov 1 01:56:08.330145 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks
Nov 1 01:56:08.330216 kernel: sd 1:0:0:0: [sda] Write Protect is off
Nov 1 01:56:08.330279 kernel: sd 0:0:0:0: [sdb] Write Protect is off
Nov 1 01:56:08.330347 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
Nov 1 01:56:08.330411 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Nov 1 01:56:08.330476 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 1 01:56:08.330536 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 1 01:56:08.330633 kernel: ata2.00: Enabling discard_zeroes_data
Nov 1 01:56:08.330646 kernel: ata1.00: Enabling discard_zeroes_data
Nov 1 01:56:08.330658 kernel: ata2.00: Enabling discard_zeroes_data
Nov 1 01:56:08.330670 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Nov 1 01:56:08.330772 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 01:56:08.330788 kernel: GPT:9289727 != 937703087
Nov 1 01:56:08.330801 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 01:56:08.330812 kernel: GPT:9289727 != 937703087
Nov 1 01:56:08.330823 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 01:56:08.330835 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Nov 1 01:56:08.330847 kernel: ata1.00: Enabling discard_zeroes_data
Nov 1 01:56:08.330859 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk
Nov 1 01:56:08.360353 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Nov 1 01:56:08.391510 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (674)
Nov 1 01:56:08.391464 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Nov 1 01:56:08.415630 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Nov 1 01:56:08.448722 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Nov 1 01:56:08.464826 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 01:56:08.500238 kernel: ata1.00: Enabling discard_zeroes_data
Nov 1 01:56:08.477496 systemd[1]: Starting disk-uuid.service...
Nov 1 01:56:08.516493 disk-uuid[693]: Primary Header is updated.
Nov 1 01:56:08.516493 disk-uuid[693]: Secondary Entries is updated.
Nov 1 01:56:08.516493 disk-uuid[693]: Secondary Header is updated.
Nov 1 01:56:08.567449 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Nov 1 01:56:08.567460 kernel: ata1.00: Enabling discard_zeroes_data
Nov 1 01:56:08.567467 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Nov 1 01:56:08.567474 kernel: ata1.00: Enabling discard_zeroes_data
Nov 1 01:56:08.592388 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Nov 1 01:56:09.572841 kernel: ata1.00: Enabling discard_zeroes_data
Nov 1 01:56:09.591078 disk-uuid[694]: The operation has completed successfully.
Nov 1 01:56:09.600592 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Nov 1 01:56:09.631277 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 01:56:09.723515 kernel: audit: type=1130 audit(1761962169.639:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:09.723531 kernel: audit: type=1131 audit(1761962169.639:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:09.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:09.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:09.631323 systemd[1]: Finished disk-uuid.service.
Nov 1 01:56:09.751367 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 1 01:56:09.640048 systemd[1]: Starting verity-setup.service...
Nov 1 01:56:09.794562 systemd[1]: Found device dev-mapper-usr.device.
Nov 1 01:56:09.806863 systemd[1]: Mounting sysusr-usr.mount...
Nov 1 01:56:09.817993 systemd[1]: Finished verity-setup.service.
Nov 1 01:56:09.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:09.883336 kernel: audit: type=1130 audit(1761962169.833:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:09.939188 systemd[1]: Mounted sysusr-usr.mount.
Nov 1 01:56:09.955444 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Nov 1 01:56:09.947639 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Nov 1 01:56:10.033411 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 01:56:10.033427 kernel: BTRFS info (device sdb6): using free space tree
Nov 1 01:56:10.033435 kernel: BTRFS info (device sdb6): has skinny extents
Nov 1 01:56:10.033442 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Nov 1 01:56:09.948040 systemd[1]: Starting ignition-setup.service...
Nov 1 01:56:09.968872 systemd[1]: Starting parse-ip-for-networkd.service...
Nov 1 01:56:10.104351 kernel: audit: type=1130 audit(1761962170.059:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:10.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:10.042799 systemd[1]: Finished ignition-setup.service.
Nov 1 01:56:10.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:10.059682 systemd[1]: Finished parse-ip-for-networkd.service.
Nov 1 01:56:10.188476 kernel: audit: type=1130 audit(1761962170.112:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:10.188491 kernel: audit: type=1334 audit(1761962170.167:24): prog-id=9 op=LOAD
Nov 1 01:56:10.167000 audit: BPF prog-id=9 op=LOAD
Nov 1 01:56:10.112868 systemd[1]: Starting ignition-fetch-offline.service...
Nov 1 01:56:10.168269 systemd[1]: Starting systemd-networkd.service...
Nov 1 01:56:10.252406 kernel: audit: type=1130 audit(1761962170.205:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:10.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:10.203559 systemd-networkd[877]: lo: Link UP
Nov 1 01:56:10.237915 ignition[871]: Ignition 2.14.0
Nov 1 01:56:10.203561 systemd-networkd[877]: lo: Gained carrier
Nov 1 01:56:10.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:10.237920 ignition[871]: Stage: fetch-offline
Nov 1 01:56:10.402452 kernel: audit: type=1130 audit(1761962170.281:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:10.402469 kernel: audit: type=1130 audit(1761962170.336:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:10.402477 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Nov 1 01:56:10.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:10.203915 systemd-networkd[877]: Enumeration completed
Nov 1 01:56:10.433598 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready
Nov 1 01:56:10.237950 ignition[871]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 01:56:10.204000 systemd[1]: Started systemd-networkd.service.
Nov 1 01:56:10.237964 ignition[871]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Nov 1 01:56:10.204656 systemd-networkd[877]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 01:56:10.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:10.246691 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:56:10.495554 iscsid[903]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 01:56:10.495554 iscsid[903]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Nov 1 01:56:10.495554 iscsid[903]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Nov 1 01:56:10.495554 iscsid[903]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Nov 1 01:56:10.495554 iscsid[903]: If using hardware iscsi like qla4xxx this message can be ignored.
Nov 1 01:56:10.495554 iscsid[903]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 01:56:10.495554 iscsid[903]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Nov 1 01:56:10.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:10.205530 systemd[1]: Reached target network.target.
Nov 1 01:56:10.665520 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Nov 1 01:56:10.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:10.246761 ignition[871]: parsed url from cmdline: ""
Nov 1 01:56:10.250957 unknown[871]: fetched base config from "system"
Nov 1 01:56:10.246763 ignition[871]: no config URL provided
Nov 1 01:56:10.250963 unknown[871]: fetched user config from "system"
Nov 1 01:56:10.246765 ignition[871]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 01:56:10.260888 systemd[1]: Starting iscsiuio.service...
Nov 1 01:56:10.246788 ignition[871]: parsing config with SHA512: a2a10970e7086588580650c1de0497cbb13a8bf043f7efefb7ae9787bad07ebf78e3270f760da45915583695f46dc218912c70b8d0ed17c9f98333cc933d9317
Nov 1 01:56:10.275645 systemd[1]: Started iscsiuio.service.
Nov 1 01:56:10.251287 ignition[871]: fetch-offline: fetch-offline passed
Nov 1 01:56:10.281626 systemd[1]: Finished ignition-fetch-offline.service.
Nov 1 01:56:10.251291 ignition[871]: POST message to Packet Timeline
Nov 1 01:56:10.336584 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 1 01:56:10.251297 ignition[871]: POST Status error: resource requires networking
Nov 1 01:56:10.337044 systemd[1]: Starting ignition-kargs.service...
Nov 1 01:56:10.251348 ignition[871]: Ignition finished successfully
Nov 1 01:56:10.405321 systemd-networkd[877]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 01:56:10.407885 ignition[893]: Ignition 2.14.0
Nov 1 01:56:10.416869 systemd[1]: Starting iscsid.service...
Nov 1 01:56:10.407897 ignition[893]: Stage: kargs
Nov 1 01:56:10.441465 systemd[1]: Started iscsid.service.
Nov 1 01:56:10.408031 ignition[893]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 01:56:10.467581 systemd[1]: Starting dracut-initqueue.service...
Nov 1 01:56:10.408058 ignition[893]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Nov 1 01:56:10.497873 systemd[1]: Finished dracut-initqueue.service.
Nov 1 01:56:10.410667 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:56:10.516721 systemd[1]: Reached target remote-fs-pre.target.
Nov 1 01:56:10.412654 ignition[893]: kargs: kargs passed
Nov 1 01:56:10.550538 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 01:56:10.412657 ignition[893]: POST message to Packet Timeline
Nov 1 01:56:10.571619 systemd[1]: Reached target remote-fs.target.
Nov 1 01:56:10.412668 ignition[893]: GET https://metadata.packet.net/metadata: attempt #1
Nov 1 01:56:10.588430 systemd[1]: Starting dracut-pre-mount.service...
Nov 1 01:56:10.416536 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57514->[::1]:53: read: connection refused
Nov 1 01:56:10.627714 systemd[1]: Finished dracut-pre-mount.service.
Nov 1 01:56:10.617107 ignition[893]: GET https://metadata.packet.net/metadata: attempt #2
Nov 1 01:56:10.662948 systemd-networkd[877]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 01:56:10.617430 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60090->[::1]:53: read: connection refused
Nov 1 01:56:10.691692 systemd-networkd[877]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 01:56:10.719950 systemd-networkd[877]: enp1s0f1np1: Link UP
Nov 1 01:56:10.720167 systemd-networkd[877]: enp1s0f1np1: Gained carrier
Nov 1 01:56:10.729716 systemd-networkd[877]: enp1s0f0np0: Link UP
Nov 1 01:56:10.729985 systemd-networkd[877]: eno2: Link UP
Nov 1 01:56:10.730237 systemd-networkd[877]: eno1: Link UP
Nov 1 01:56:11.018121 ignition[893]: GET https://metadata.packet.net/metadata: attempt #3
Nov 1 01:56:11.019263 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36438->[::1]:53: read: connection refused
Nov 1 01:56:11.426667 systemd-networkd[877]: enp1s0f0np0: Gained carrier
Nov 1 01:56:11.436580 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready
Nov 1 01:56:11.457515 systemd-networkd[877]: enp1s0f0np0: DHCPv4 address 139.178.90.71/31, gateway 139.178.90.70 acquired from 145.40.83.140
Nov 1 01:56:11.819662 ignition[893]: GET https://metadata.packet.net/metadata: attempt #4
Nov 1 01:56:11.821115 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44107->[::1]:53: read: connection refused
Nov 1 01:56:12.173815 systemd-networkd[877]: enp1s0f1np1: Gained IPv6LL
Nov 1 01:56:12.493814 systemd-networkd[877]: enp1s0f0np0: Gained IPv6LL
Nov 1 01:56:13.422688 ignition[893]: GET https://metadata.packet.net/metadata: attempt #5
Nov 1 01:56:13.423960 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:32955->[::1]:53: read: connection refused
Nov 1 01:56:16.627411 ignition[893]: GET https://metadata.packet.net/metadata: attempt #6
Nov 1 01:56:17.784612 ignition[893]: GET result: OK
Nov 1 01:56:18.225919 ignition[893]: Ignition finished successfully
Nov 1 01:56:18.230674 systemd[1]: Finished ignition-kargs.service.
Nov 1 01:56:18.313605 kernel: kauditd_printk_skb: 3 callbacks suppressed
Nov 1 01:56:18.313640 kernel: audit: type=1130 audit(1761962178.242:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:18.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:18.251403 ignition[922]: Ignition 2.14.0
Nov 1 01:56:18.244749 systemd[1]: Starting ignition-disks.service...
Nov 1 01:56:18.251428 ignition[922]: Stage: disks
Nov 1 01:56:18.251507 ignition[922]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 01:56:18.251516 ignition[922]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Nov 1 01:56:18.252910 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:56:18.254182 ignition[922]: disks: disks passed
Nov 1 01:56:18.254185 ignition[922]: POST message to Packet Timeline
Nov 1 01:56:18.254196 ignition[922]: GET https://metadata.packet.net/metadata: attempt #1
Nov 1 01:56:19.326279 ignition[922]: GET result: OK
Nov 1 01:56:19.812353 ignition[922]: Ignition finished successfully
Nov 1 01:56:19.815711 systemd[1]: Finished ignition-disks.service.
Nov 1 01:56:19.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:19.828903 systemd[1]: Reached target initrd-root-device.target.
Nov 1 01:56:19.904608 kernel: audit: type=1130 audit(1761962179.828:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:19.889575 systemd[1]: Reached target local-fs-pre.target.
Nov 1 01:56:19.889612 systemd[1]: Reached target local-fs.target.
Nov 1 01:56:19.913564 systemd[1]: Reached target sysinit.target.
Nov 1 01:56:19.927550 systemd[1]: Reached target basic.target.
Nov 1 01:56:19.941239 systemd[1]: Starting systemd-fsck-root.service...
Nov 1 01:56:19.960229 systemd-fsck[937]: ROOT: clean, 637/553520 files, 56032/553472 blocks
Nov 1 01:56:19.973737 systemd[1]: Finished systemd-fsck-root.service.
Nov 1 01:56:20.061823 kernel: audit: type=1130 audit(1761962179.982:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:20.061838 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Nov 1 01:56:19.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:19.988280 systemd[1]: Mounting sysroot.mount...
Nov 1 01:56:20.070014 systemd[1]: Mounted sysroot.mount.
Nov 1 01:56:20.084614 systemd[1]: Reached target initrd-root-fs.target.
Nov 1 01:56:20.092211 systemd[1]: Mounting sysroot-usr.mount...
Nov 1 01:56:20.117174 systemd[1]: Starting flatcar-metadata-hostname.service...
Nov 1 01:56:20.125851 systemd[1]: Starting flatcar-static-network.service...
Nov 1 01:56:20.141437 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 01:56:20.141473 systemd[1]: Reached target ignition-diskful.target.
Nov 1 01:56:20.159538 systemd[1]: Mounted sysroot-usr.mount.
Nov 1 01:56:20.183780 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Nov 1 01:56:20.296939 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (950)
Nov 1 01:56:20.296955 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 01:56:20.296970 kernel: BTRFS info (device sdb6): using free space tree
Nov 1 01:56:20.296978 kernel: BTRFS info (device sdb6): has skinny extents
Nov 1 01:56:20.296985 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Nov 1 01:56:20.196691 systemd[1]: Starting initrd-setup-root.service...
Nov 1 01:56:20.381636 kernel: audit: type=1130 audit(1761962180.328:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:20.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:20.381680 coreos-metadata[944]: Nov 01 01:56:20.258 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 1 01:56:20.404588 coreos-metadata[945]: Nov 01 01:56:20.259 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 1 01:56:20.423442 initrd-setup-root[955]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 01:56:20.256801 systemd[1]: Finished initrd-setup-root.service.
Nov 1 01:56:20.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:20.473528 initrd-setup-root[963]: cut: /sysroot/etc/group: No such file or directory
Nov 1 01:56:20.504556 kernel: audit: type=1130 audit(1761962180.439:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:20.329688 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Nov 1 01:56:20.513599 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 01:56:20.391836 systemd[1]: Starting ignition-mount.service...
Nov 1 01:56:20.530584 initrd-setup-root[979]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 01:56:20.411992 systemd[1]: Starting sysroot-boot.service...
Nov 1 01:56:20.547494 ignition[1020]: INFO : Ignition 2.14.0
Nov 1 01:56:20.547494 ignition[1020]: INFO : Stage: mount
Nov 1 01:56:20.547494 ignition[1020]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 01:56:20.547494 ignition[1020]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Nov 1 01:56:20.547494 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:56:20.547494 ignition[1020]: INFO : mount: mount passed
Nov 1 01:56:20.547494 ignition[1020]: INFO : POST message to Packet Timeline
Nov 1 01:56:20.547494 ignition[1020]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 1 01:56:20.431410 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Nov 1 01:56:20.431461 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Nov 1 01:56:20.432947 systemd[1]: Finished sysroot-boot.service.
Nov 1 01:56:21.338801 coreos-metadata[944]: Nov 01 01:56:21.338 INFO Fetch successful
Nov 1 01:56:21.347419 coreos-metadata[945]: Nov 01 01:56:21.342 INFO Fetch successful
Nov 1 01:56:21.371946 coreos-metadata[944]: Nov 01 01:56:21.371 INFO wrote hostname ci-3510.3.8-n-0f05b56927 to /sysroot/etc/hostname
Nov 1 01:56:21.372474 systemd[1]: Finished flatcar-metadata-hostname.service.
Nov 1 01:56:21.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:21.416538 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Nov 1 01:56:21.490415 kernel: audit: type=1130 audit(1761962181.395:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:21.490430 kernel: audit: type=1130 audit(1761962181.460:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:21.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:21.416578 systemd[1]: Finished flatcar-static-network.service.
Nov 1 01:56:21.583578 kernel: audit: type=1131 audit(1761962181.460:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:21.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:21.583611 ignition[1020]: INFO : GET result: OK
Nov 1 01:56:21.933043 ignition[1020]: INFO : Ignition finished successfully
Nov 1 01:56:21.936430 systemd[1]: Finished ignition-mount.service.
Nov 1 01:56:21.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:21.951689 systemd[1]: Starting ignition-files.service...
Nov 1 01:56:22.023566 kernel: audit: type=1130 audit(1761962181.949:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:22.018515 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Nov 1 01:56:22.083044 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1034)
Nov 1 01:56:22.083060 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 01:56:22.083070 kernel: BTRFS info (device sdb6): using free space tree
Nov 1 01:56:22.106742 kernel: BTRFS info (device sdb6): has skinny extents
Nov 1 01:56:22.156329 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Nov 1 01:56:22.157575 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Nov 1 01:56:22.174483 ignition[1053]: INFO : Ignition 2.14.0
Nov 1 01:56:22.174483 ignition[1053]: INFO : Stage: files
Nov 1 01:56:22.174483 ignition[1053]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 01:56:22.174483 ignition[1053]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Nov 1 01:56:22.174483 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:56:22.178075 unknown[1053]: wrote ssh authorized keys file for user: core
Nov 1 01:56:22.238504 ignition[1053]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 01:56:22.238504 ignition[1053]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 01:56:22.238504 ignition[1053]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 01:56:22.238504 ignition[1053]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 01:56:22.238504 ignition[1053]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 01:56:22.238504 ignition[1053]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 01:56:22.238504 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 01:56:22.238504 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 01:56:22.238504 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 01:56:22.238504 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 01:56:22.238504 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Nov 1 01:56:22.382412 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Nov 1 01:56:22.373001 systemd[1]: mnt-oem1632312300.mount: Deactivated successfully.
Nov 1 01:56:22.645602 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1632312300"
Nov 1 01:56:22.645602 ignition[1053]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1632312300": device or resource busy
Nov 1 01:56:22.645602 ignition[1053]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1632312300", trying btrfs: device or resource busy
Nov 1 01:56:22.645602 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1632312300"
Nov 1 01:56:22.645602 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1632312300"
Nov 1 01:56:22.645602 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1632312300"
Nov 1 01:56:22.645602 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1632312300"
Nov 1 01:56:22.645602 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Nov 1 01:56:22.645602 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 01:56:22.645602 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 1 01:56:22.859300 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Nov 1 01:56:23.063750 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 01:56:23.063750 ignition[1053]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Nov 1 01:56:23.063750 ignition[1053]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Nov 1 01:56:23.063750 ignition[1053]: INFO : files: op(11): [started] processing unit "packet-phone-home.service"
Nov 1 01:56:23.063750 ignition[1053]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service"
Nov 1 01:56:23.063750 ignition[1053]: INFO : files: op(12): [started] processing unit "containerd.service"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: op(12): [finished] processing unit "containerd.service"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: op(17): [started] setting preset to enabled for "packet-phone-home.service"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: op(17): [finished] setting preset to enabled for "packet-phone-home.service"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: createResultFile: createFiles: op(19): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: createResultFile: createFiles: op(19): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 01:56:23.145602 ignition[1053]: INFO : files: files passed
Nov 1 01:56:23.145602 ignition[1053]: INFO : POST message to Packet Timeline
Nov 1 01:56:23.145602 ignition[1053]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 1 01:56:24.053191 ignition[1053]: INFO : GET result: OK
Nov 1 01:56:24.461579 ignition[1053]: INFO : Ignition finished successfully
Nov 1 01:56:24.483070 systemd[1]: Finished ignition-files.service.
Nov 1 01:56:24.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.498020 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Nov 1 01:56:24.570604 kernel: audit: type=1130 audit(1761962184.492:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.560634 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Nov 1 01:56:24.595736 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 01:56:24.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.560959 systemd[1]: Starting ignition-quench.service...
Nov 1 01:56:24.788001 kernel: audit: type=1130 audit(1761962184.606:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.788019 kernel: audit: type=1130 audit(1761962184.674:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.788027 kernel: audit: type=1131 audit(1761962184.674:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.578694 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Nov 1 01:56:24.607366 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 01:56:24.607609 systemd[1]: Finished ignition-quench.service.
Nov 1 01:56:24.950560 kernel: audit: type=1130 audit(1761962184.829:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.950572 kernel: audit: type=1131 audit(1761962184.829:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.674606 systemd[1]: Reached target ignition-complete.target.
Nov 1 01:56:24.796928 systemd[1]: Starting initrd-parse-etc.service...
Nov 1 01:56:24.810815 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 01:56:24.810859 systemd[1]: Finished initrd-parse-etc.service.
Nov 1 01:56:25.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.829729 systemd[1]: Reached target initrd-fs.target.
Nov 1 01:56:25.077561 kernel: audit: type=1130 audit(1761962185.006:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:24.959615 systemd[1]: Reached target initrd.target.
Nov 1 01:56:24.975597 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Nov 1 01:56:24.976185 systemd[1]: Starting dracut-pre-pivot.service...
Nov 1 01:56:24.990768 systemd[1]: Finished dracut-pre-pivot.service.
Nov 1 01:56:25.007604 systemd[1]: Starting initrd-cleanup.service...
Nov 1 01:56:25.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.073605 systemd[1]: Stopped target nss-lookup.target.
Nov 1 01:56:25.223516 kernel: audit: type=1131 audit(1761962185.143:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.086668 systemd[1]: Stopped target remote-cryptsetup.target.
Nov 1 01:56:25.111677 systemd[1]: Stopped target timers.target.
Nov 1 01:56:25.127865 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 01:56:25.128235 systemd[1]: Stopped dracut-pre-pivot.service.
Nov 1 01:56:25.144260 systemd[1]: Stopped target initrd.target.
Nov 1 01:56:25.216499 systemd[1]: Stopped target basic.target.
Nov 1 01:56:25.223671 systemd[1]: Stopped target ignition-complete.target.
Nov 1 01:56:25.246750 systemd[1]: Stopped target ignition-diskful.target.
Nov 1 01:56:25.262709 systemd[1]: Stopped target initrd-root-device.target.
Nov 1 01:56:25.279742 systemd[1]: Stopped target remote-fs.target.
Nov 1 01:56:25.295785 systemd[1]: Stopped target remote-fs-pre.target.
Nov 1 01:56:25.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.313036 systemd[1]: Stopped target sysinit.target.
Nov 1 01:56:25.479576 kernel: audit: type=1131 audit(1761962185.396:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.329034 systemd[1]: Stopped target local-fs.target.
Nov 1 01:56:25.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.347037 systemd[1]: Stopped target local-fs-pre.target.
Nov 1 01:56:25.565580 kernel: audit: type=1131 audit(1761962185.489:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.364013 systemd[1]: Stopped target swap.target.
Nov 1 01:56:25.379903 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 01:56:25.380280 systemd[1]: Stopped dracut-pre-mount.service.
Nov 1 01:56:25.397384 systemd[1]: Stopped target cryptsetup.target.
Nov 1 01:56:25.472718 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 01:56:25.472800 systemd[1]: Stopped dracut-initqueue.service.
Nov 1 01:56:25.489822 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 01:56:25.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.489971 systemd[1]: Stopped ignition-fetch-offline.service.
Nov 1 01:56:25.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.558803 systemd[1]: Stopped target paths.target.
Nov 1 01:56:25.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.572582 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 01:56:25.721559 ignition[1102]: INFO : Ignition 2.14.0
Nov 1 01:56:25.721559 ignition[1102]: INFO : Stage: umount
Nov 1 01:56:25.721559 ignition[1102]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 01:56:25.721559 ignition[1102]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Nov 1 01:56:25.721559 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:56:25.721559 ignition[1102]: INFO : umount: umount passed
Nov 1 01:56:25.721559 ignition[1102]: INFO : POST message to Packet Timeline
Nov 1 01:56:25.721559 ignition[1102]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 1 01:56:25.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:25.576571 systemd[1]: Stopped systemd-ask-password-console.path.
Nov 1 01:56:25.579733 systemd[1]: Stopped target slices.target.
Nov 1 01:56:25.595718 systemd[1]: Stopped target sockets.target.
Nov 1 01:56:25.618005 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 01:56:25.618262 systemd[1]: Closed iscsid.socket.
Nov 1 01:56:25.632057 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 01:56:25.632321 systemd[1]: Closed iscsiuio.socket.
Nov 1 01:56:25.647112 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 01:56:25.647526 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Nov 1 01:56:25.664140 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 01:56:25.664531 systemd[1]: Stopped ignition-files.service.
Nov 1 01:56:25.682146 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 1 01:56:25.682553 systemd[1]: Stopped flatcar-metadata-hostname.service.
Nov 1 01:56:25.700322 systemd[1]: Stopping ignition-mount.service...
Nov 1 01:56:25.711435 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 01:56:25.711602 systemd[1]: Stopped kmod-static-nodes.service.
Nov 1 01:56:25.730229 systemd[1]: Stopping sysroot-boot.service...
Nov 1 01:56:25.737524 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 01:56:25.737645 systemd[1]: Stopped systemd-udev-trigger.service.
Nov 1 01:56:25.769118 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 01:56:25.769472 systemd[1]: Stopped dracut-pre-trigger.service.
Nov 1 01:56:25.805886 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 01:56:25.807842 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 01:56:25.808084 systemd[1]: Stopped sysroot-boot.service.
Nov 1 01:56:25.826354 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 01:56:25.826587 systemd[1]: Finished initrd-cleanup.service.
Nov 1 01:56:27.020502 ignition[1102]: INFO : GET result: OK
Nov 1 01:56:27.427568 ignition[1102]: INFO : Ignition finished successfully
Nov 1 01:56:27.430451 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 01:56:27.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.430710 systemd[1]: Stopped ignition-mount.service.
Nov 1 01:56:27.444794 systemd[1]: Stopped target network.target.
Nov 1 01:56:27.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.460579 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 01:56:27.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.460743 systemd[1]: Stopped ignition-disks.service.
Nov 1 01:56:27.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.475694 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 01:56:27.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.475827 systemd[1]: Stopped ignition-kargs.service.
Nov 1 01:56:27.490662 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 01:56:27.490802 systemd[1]: Stopped ignition-setup.service.
Nov 1 01:56:27.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.506849 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 01:56:27.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.586000 audit: BPF prog-id=6 op=UNLOAD
Nov 1 01:56:27.507003 systemd[1]: Stopped initrd-setup-root.service.
Nov 1 01:56:27.522132 systemd[1]: Stopping systemd-networkd.service...
Nov 1 01:56:27.531494 systemd-networkd[877]: enp1s0f1np1: DHCPv6 lease lost
Nov 1 01:56:27.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.536792 systemd[1]: Stopping systemd-resolved.service...
Nov 1 01:56:27.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.540483 systemd-networkd[877]: enp1s0f0np0: DHCPv6 lease lost
Nov 1 01:56:27.657000 audit: BPF prog-id=9 op=UNLOAD
Nov 1 01:56:27.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.552175 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 01:56:27.552468 systemd[1]: Stopped systemd-resolved.service.
Nov 1 01:56:27.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.570107 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 01:56:27.570389 systemd[1]: Stopped systemd-networkd.service.
Nov 1 01:56:27.585069 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 01:56:27.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.585165 systemd[1]: Closed systemd-networkd.socket.
Nov 1 01:56:27.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.603878 systemd[1]: Stopping network-cleanup.service...
Nov 1 01:56:27.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.611555 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 01:56:27.611588 systemd[1]: Stopped parse-ip-for-networkd.service.
Nov 1 01:56:27.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.631585 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 01:56:27.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.631645 systemd[1]: Stopped systemd-sysctl.service.
Nov 1 01:56:27.648849 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 01:56:27.648939 systemd[1]: Stopped systemd-modules-load.service.
Nov 1 01:56:27.666085 systemd[1]: Stopping systemd-udevd.service...
Nov 1 01:56:27.686449 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 1 01:56:27.687391 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 01:56:27.687456 systemd[1]: Stopped systemd-udevd.service.
Nov 1 01:56:27.692725 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 01:56:27.692753 systemd[1]: Closed systemd-udevd-control.socket.
Nov 1 01:56:27.713565 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 01:56:27.713606 systemd[1]: Closed systemd-udevd-kernel.socket.
Nov 1 01:56:27.729542 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 01:56:27.729617 systemd[1]: Stopped dracut-pre-udev.service.
Nov 1 01:56:27.744766 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 01:56:27.744874 systemd[1]: Stopped dracut-cmdline.service.
Nov 1 01:56:27.762492 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 01:56:27.762527 systemd[1]: Stopped dracut-cmdline-ask.service.
Nov 1 01:56:27.779121 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Nov 1 01:56:27.795482 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 01:56:27.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:27.795574 systemd[1]: Stopped systemd-vconsole-setup.service.
Nov 1 01:56:27.814765 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 01:56:27.815063 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Nov 1 01:56:28.039000 audit: BPF prog-id=8 op=UNLOAD
Nov 1 01:56:28.039000 audit: BPF prog-id=7 op=UNLOAD
Nov 1 01:56:28.041000 audit: BPF prog-id=5 op=UNLOAD
Nov 1 01:56:28.041000 audit: BPF prog-id=4 op=UNLOAD
Nov 1 01:56:28.041000 audit: BPF prog-id=3 op=UNLOAD
Nov 1 01:56:27.982568 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 01:56:27.982825 systemd[1]: Stopped network-cleanup.service.
Nov 1 01:56:28.085803 iscsid[903]: iscsid shutting down.
Nov 1 01:56:27.996991 systemd[1]: Reached target initrd-switch-root.target.
Nov 1 01:56:28.016550 systemd[1]: Starting initrd-switch-root.service...
Nov 1 01:56:28.039998 systemd[1]: Switching root.
Nov 1 01:56:28.086078 systemd-journald[268]: Journal stopped
Nov 1 01:56:31.945055 systemd-journald[268]: Received SIGTERM from PID 1 (systemd).
Nov 1 01:56:31.945069 kernel: SELinux: Class mctp_socket not defined in policy.
Nov 1 01:56:31.945077 kernel: SELinux: Class anon_inode not defined in policy.
Nov 1 01:56:31.945083 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 01:56:31.945089 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 01:56:31.945094 kernel: SELinux: policy capability open_perms=1 Nov 1 01:56:31.945100 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 01:56:31.945106 kernel: SELinux: policy capability always_check_network=0 Nov 1 01:56:31.945112 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 01:56:31.945118 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 01:56:31.945124 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 01:56:31.945129 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 01:56:31.945135 systemd[1]: Successfully loaded SELinux policy in 317.343ms. Nov 1 01:56:31.945142 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.293ms. Nov 1 01:56:31.945151 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 01:56:31.945157 systemd[1]: Detected architecture x86-64. Nov 1 01:56:31.945164 systemd[1]: Detected first boot. Nov 1 01:56:31.945170 systemd[1]: Hostname set to . Nov 1 01:56:31.945176 systemd[1]: Initializing machine ID from random generator. Nov 1 01:56:31.945182 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 01:56:31.945188 systemd[1]: Populated /etc with preset unit settings. Nov 1 01:56:31.945196 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Nov 1 01:56:31.945202 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:56:31.945209 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:56:31.945216 systemd[1]: Queued start job for default target multi-user.target. Nov 1 01:56:31.945222 systemd[1]: Unnecessary job was removed for dev-sdb6.device. Nov 1 01:56:31.945228 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 01:56:31.945236 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 01:56:31.945243 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Nov 1 01:56:31.945250 systemd[1]: Created slice system-getty.slice. Nov 1 01:56:31.945256 systemd[1]: Created slice system-modprobe.slice. Nov 1 01:56:31.945262 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 01:56:31.945269 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 01:56:31.945275 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 01:56:31.945281 systemd[1]: Created slice user.slice. Nov 1 01:56:31.945288 systemd[1]: Started systemd-ask-password-console.path. Nov 1 01:56:31.945295 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 01:56:31.945301 systemd[1]: Set up automount boot.automount. Nov 1 01:56:31.945308 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 01:56:31.945314 systemd[1]: Reached target integritysetup.target. Nov 1 01:56:31.945322 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 01:56:31.945333 systemd[1]: Reached target remote-fs.target. Nov 1 01:56:31.945340 systemd[1]: Reached target slices.target. Nov 1 01:56:31.945366 systemd[1]: Reached target swap.target. Nov 1 01:56:31.945374 systemd[1]: Reached target torcx.target. 
Nov 1 01:56:31.945394 systemd[1]: Reached target veritysetup.target. Nov 1 01:56:31.945401 systemd[1]: Listening on systemd-coredump.socket. Nov 1 01:56:31.945407 systemd[1]: Listening on systemd-initctl.socket. Nov 1 01:56:31.945414 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 01:56:31.945421 kernel: kauditd_printk_skb: 47 callbacks suppressed Nov 1 01:56:31.945427 kernel: audit: type=1400 audit(1761962191.187:90): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 01:56:31.945435 kernel: audit: type=1335 audit(1761962191.187:91): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Nov 1 01:56:31.945441 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 01:56:31.945448 systemd[1]: Listening on systemd-journald.socket. Nov 1 01:56:31.945454 systemd[1]: Listening on systemd-networkd.socket. Nov 1 01:56:31.945461 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 01:56:31.945468 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 01:56:31.945476 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 01:56:31.945482 systemd[1]: Mounting dev-hugepages.mount... Nov 1 01:56:31.945489 systemd[1]: Mounting dev-mqueue.mount... Nov 1 01:56:31.945496 systemd[1]: Mounting media.mount... Nov 1 01:56:31.945502 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:56:31.945509 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 01:56:31.945516 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 01:56:31.945522 systemd[1]: Mounting tmp.mount... Nov 1 01:56:31.945530 systemd[1]: Starting flatcar-tmpfiles.service... 
Nov 1 01:56:31.945537 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 01:56:31.945543 systemd[1]: Starting kmod-static-nodes.service... Nov 1 01:56:31.945550 systemd[1]: Starting modprobe@configfs.service... Nov 1 01:56:31.945557 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 01:56:31.945564 systemd[1]: Starting modprobe@drm.service... Nov 1 01:56:31.945570 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 01:56:31.945578 systemd[1]: Starting modprobe@fuse.service... Nov 1 01:56:31.945585 kernel: fuse: init (API version 7.34) Nov 1 01:56:31.945591 systemd[1]: Starting modprobe@loop.service... Nov 1 01:56:31.945598 kernel: loop: module loaded Nov 1 01:56:31.945604 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 01:56:31.945611 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 1 01:56:31.945618 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Nov 1 01:56:31.945625 systemd[1]: Starting systemd-journald.service... Nov 1 01:56:31.945631 systemd[1]: Starting systemd-modules-load.service... Nov 1 01:56:31.945638 kernel: audit: type=1305 audit(1761962191.942:92): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 01:56:31.945647 systemd-journald[1297]: Journal started Nov 1 01:56:31.945672 systemd-journald[1297]: Runtime Journal (/run/log/journal/4be8fc5da03c48ee855e1dcee3d952eb) is 8.0M, max 640.1M, 632.1M free. 
Nov 1 01:56:31.187000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 01:56:31.187000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Nov 1 01:56:31.942000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 01:56:31.942000 audit[1297]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffff747a980 a2=4000 a3=7ffff747aa1c items=0 ppid=1 pid=1297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:56:31.942000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 01:56:31.991330 kernel: audit: type=1300 audit(1761962191.942:92): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffff747a980 a2=4000 a3=7ffff747aa1c items=0 ppid=1 pid=1297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:56:31.991344 kernel: audit: type=1327 audit(1761962191.942:92): proctitle="/usr/lib/systemd/systemd-journald" Nov 1 01:56:32.105531 systemd[1]: Starting systemd-network-generator.service... Nov 1 01:56:32.132533 systemd[1]: Starting systemd-remount-fs.service... Nov 1 01:56:32.157361 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 01:56:32.200373 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:56:32.219532 systemd[1]: Started systemd-journald.service. 
Nov 1 01:56:32.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.229137 systemd[1]: Mounted dev-hugepages.mount. Nov 1 01:56:32.277535 kernel: audit: type=1130 audit(1761962192.228:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.284600 systemd[1]: Mounted dev-mqueue.mount. Nov 1 01:56:32.291592 systemd[1]: Mounted media.mount. Nov 1 01:56:32.298596 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 01:56:32.307606 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 01:56:32.316568 systemd[1]: Mounted tmp.mount. Nov 1 01:56:32.323691 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 01:56:32.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.332745 systemd[1]: Finished kmod-static-nodes.service. Nov 1 01:56:32.380371 kernel: audit: type=1130 audit(1761962192.332:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.388662 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 01:56:32.388744 systemd[1]: Finished modprobe@configfs.service. 
Nov 1 01:56:32.437385 kernel: audit: type=1130 audit(1761962192.388:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.445677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:56:32.445756 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 01:56:32.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.496370 kernel: audit: type=1130 audit(1761962192.445:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.496399 kernel: audit: type=1131 audit(1761962192.445:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.556700 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 1 01:56:32.556777 systemd[1]: Finished modprobe@drm.service. Nov 1 01:56:32.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.565735 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:56:32.565812 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 01:56:32.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.574716 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 01:56:32.574791 systemd[1]: Finished modprobe@fuse.service. Nov 1 01:56:32.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.583663 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:56:32.583748 systemd[1]: Finished modprobe@loop.service. 
Nov 1 01:56:32.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.592754 systemd[1]: Finished systemd-modules-load.service. Nov 1 01:56:32.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.601701 systemd[1]: Finished systemd-network-generator.service. Nov 1 01:56:32.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.610848 systemd[1]: Finished systemd-remount-fs.service. Nov 1 01:56:32.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.619867 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 01:56:32.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.628980 systemd[1]: Reached target network-pre.target. Nov 1 01:56:32.639702 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 01:56:32.648040 systemd[1]: Mounting sys-kernel-config.mount... 
Nov 1 01:56:32.655540 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 01:56:32.656561 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 01:56:32.665065 systemd[1]: Starting systemd-journal-flush.service... Nov 1 01:56:32.668641 systemd-journald[1297]: Time spent on flushing to /var/log/journal/4be8fc5da03c48ee855e1dcee3d952eb is 14.585ms for 1516 entries. Nov 1 01:56:32.668641 systemd-journald[1297]: System Journal (/var/log/journal/4be8fc5da03c48ee855e1dcee3d952eb) is 8.0M, max 195.6M, 187.6M free. Nov 1 01:56:32.709563 systemd-journald[1297]: Received client request to flush runtime journal. Nov 1 01:56:32.681469 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 01:56:32.682011 systemd[1]: Starting systemd-random-seed.service... Nov 1 01:56:32.699495 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 01:56:32.700095 systemd[1]: Starting systemd-sysctl.service... Nov 1 01:56:32.708021 systemd[1]: Starting systemd-sysusers.service... Nov 1 01:56:32.716164 systemd[1]: Starting systemd-udev-settle.service... Nov 1 01:56:32.724791 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 01:56:32.734521 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 01:56:32.743605 systemd[1]: Finished systemd-journal-flush.service. Nov 1 01:56:32.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.752620 systemd[1]: Finished systemd-random-seed.service. Nov 1 01:56:32.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:56:32.760566 systemd[1]: Finished systemd-sysctl.service. Nov 1 01:56:32.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.768568 systemd[1]: Finished systemd-sysusers.service. Nov 1 01:56:32.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.777510 systemd[1]: Reached target first-boot-complete.target. Nov 1 01:56:32.786146 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 01:56:32.794710 udevadm[1324]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 01:56:32.803863 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 01:56:32.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.983554 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 01:56:32.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:32.993983 systemd[1]: Starting systemd-udevd.service... Nov 1 01:56:33.017258 systemd-udevd[1331]: Using default interface naming scheme 'v252'. Nov 1 01:56:33.046135 systemd[1]: Started systemd-udevd.service. 
Nov 1 01:56:33.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:33.058853 systemd[1]: Starting systemd-networkd.service... Nov 1 01:56:33.068028 systemd[1]: Found device dev-ttyS1.device. Nov 1 01:56:33.111159 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Nov 1 01:56:33.111206 kernel: ACPI: button: Sleep Button [SLPB] Nov 1 01:56:33.136334 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 1 01:56:33.095000 audit[1346]: AVC avc: denied { confidentiality } for pid=1346 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 01:56:33.095000 audit[1346]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f97ee8c4010 a1=4d9cc a2=7f97f0570bc5 a3=5 items=42 ppid=1331 pid=1346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:56:33.095000 audit: CWD cwd="/" Nov 1 01:56:33.095000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=1 name=(null) inode=24007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=2 name=(null) inode=24007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=3 name=(null) inode=24008 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=4 name=(null) inode=24007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=5 name=(null) inode=24009 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=6 name=(null) inode=24007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=7 name=(null) inode=24010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=8 name=(null) inode=24010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=9 name=(null) inode=24011 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=10 name=(null) inode=24010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=11 name=(null) inode=24012 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=12 name=(null) inode=24010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=13 name=(null) inode=24013 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=14 name=(null) inode=24010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=15 name=(null) inode=24014 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=16 name=(null) inode=24010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=17 name=(null) inode=24015 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=18 name=(null) inode=24007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=19 name=(null) inode=24016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=20 name=(null) inode=24016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=21 name=(null) inode=24017 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=22 name=(null) inode=24016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=23 name=(null) inode=24018 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=24 name=(null) inode=24016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=25 name=(null) inode=24019 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=26 name=(null) inode=24016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=27 name=(null) inode=24020 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=28 name=(null) inode=24016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=29 name=(null) inode=24021 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=30 name=(null) inode=24007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Nov 1 01:56:33.095000 audit: PATH item=31 name=(null) inode=24022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=32 name=(null) inode=24022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=33 name=(null) inode=24023 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=34 name=(null) inode=24022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=35 name=(null) inode=24024 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=36 name=(null) inode=24022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=37 name=(null) inode=24025 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=38 name=(null) inode=24022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=39 name=(null) inode=24026 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=40 
name=(null) inode=24022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PATH item=41 name=(null) inode=24027 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:56:33.095000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 01:56:33.140330 kernel: IPMI message handler: version 39.2 Nov 1 01:56:33.140354 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 01:56:33.140368 kernel: ACPI: button: Power Button [PWRF] Nov 1 01:56:33.173735 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 01:56:33.211682 systemd[1]: Starting systemd-userdbd.service... Nov 1 01:56:33.223344 kernel: ipmi device interface Nov 1 01:56:33.258338 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Nov 1 01:56:33.261368 kernel: ipmi_si: IPMI System Interface driver Nov 1 01:56:33.261385 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Nov 1 01:56:33.299629 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Nov 1 01:56:33.299725 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Nov 1 01:56:33.299814 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Nov 1 01:56:33.299906 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Nov 1 01:56:33.385899 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Nov 1 01:56:33.385916 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Nov 1 01:56:33.408834 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Nov 1 01:56:33.527847 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Nov 1 01:56:33.527948 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Nov 1 01:56:33.528049 kernel: ipmi_si: Adding ACPI-specified kcs state machine Nov 
1 01:56:33.528084 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Nov 1 01:56:33.558081 systemd[1]: Started systemd-userdbd.service. Nov 1 01:56:33.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:33.599363 kernel: iTCO_vendor_support: vendor-support=0 Nov 1 01:56:33.599395 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Nov 1 01:56:33.692999 kernel: intel_rapl_common: Found RAPL domain package Nov 1 01:56:33.693040 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Nov 1 01:56:33.693150 kernel: intel_rapl_common: Found RAPL domain core Nov 1 01:56:33.732624 kernel: intel_rapl_common: Found RAPL domain dram Nov 1 01:56:33.732644 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Nov 1 01:56:33.753332 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Nov 1 01:56:33.796056 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Nov 1 01:56:33.797113 kernel: ipmi_ssif: IPMI SSIF Interface driver Nov 1 01:56:33.837392 systemd-networkd[1345]: bond0: netdev ready Nov 1 01:56:33.840292 systemd-networkd[1345]: lo: Link UP Nov 1 01:56:33.840295 systemd-networkd[1345]: lo: Gained carrier Nov 1 01:56:33.840829 systemd-networkd[1345]: Enumeration completed Nov 1 01:56:33.840929 systemd[1]: Started systemd-networkd.service. Nov 1 01:56:33.841209 systemd-networkd[1345]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Nov 1 01:56:33.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 01:56:33.849643 systemd[1]: Finished systemd-udev-settle.service. Nov 1 01:56:33.849762 systemd-networkd[1345]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:79:3d:91.network. Nov 1 01:56:33.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:33.858114 systemd[1]: Starting lvm2-activation-early.service... Nov 1 01:56:33.872916 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 01:56:33.905772 systemd[1]: Finished lvm2-activation-early.service. Nov 1 01:56:33.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:33.914531 systemd[1]: Reached target cryptsetup.target. Nov 1 01:56:33.923079 systemd[1]: Starting lvm2-activation.service... Nov 1 01:56:33.925321 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 01:56:33.950784 systemd[1]: Finished lvm2-activation.service. Nov 1 01:56:33.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:33.959515 systemd[1]: Reached target local-fs-pre.target. Nov 1 01:56:33.967418 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 01:56:33.967433 systemd[1]: Reached target local-fs.target. Nov 1 01:56:33.975425 systemd[1]: Reached target machines.target. Nov 1 01:56:33.984079 systemd[1]: Starting ldconfig.service... 
Nov 1 01:56:33.990829 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 01:56:33.990852 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:56:33.991436 systemd[1]: Starting systemd-boot-update.service... Nov 1 01:56:33.998866 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 01:56:34.008987 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 01:56:34.009599 systemd[1]: Starting systemd-sysext.service... Nov 1 01:56:34.009795 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1447 (bootctl) Nov 1 01:56:34.010396 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 01:56:34.025801 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 01:56:34.030487 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 01:56:34.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:34.030676 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 01:56:34.030786 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 01:56:34.079359 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 01:56:34.176462 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 01:56:34.176853 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 01:56:34.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:56:34.211371 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 01:56:34.224290 systemd-fsck[1459]: fsck.fat 4.2 (2021-01-31) Nov 1 01:56:34.224290 systemd-fsck[1459]: /dev/sdb1: 790 files, 120773/258078 clusters Nov 1 01:56:34.225053 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 01:56:34.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:34.236392 systemd[1]: Mounting boot.mount... Nov 1 01:56:34.256333 kernel: loop1: detected capacity change from 0 to 224512 Nov 1 01:56:34.262415 systemd[1]: Mounted boot.mount. Nov 1 01:56:34.269232 (sd-sysext)[1465]: Using extensions 'kubernetes'. Nov 1 01:56:34.269426 (sd-sysext)[1465]: Merged extensions into '/usr'. Nov 1 01:56:34.277046 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:56:34.277831 systemd[1]: Mounting usr-share-oem.mount... Nov 1 01:56:34.284560 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 01:56:34.285284 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 01:56:34.293032 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 01:56:34.299959 systemd[1]: Starting modprobe@loop.service... Nov 1 01:56:34.306463 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 01:56:34.306534 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:56:34.306602 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 1 01:56:34.308595 systemd[1]: Finished systemd-boot-update.service. Nov 1 01:56:34.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:34.316574 systemd[1]: Mounted usr-share-oem.mount. Nov 1 01:56:34.323588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:56:34.323668 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 01:56:34.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:34.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:34.331616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:56:34.331696 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 01:56:34.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:34.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:34.339615 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:56:34.339698 systemd[1]: Finished modprobe@loop.service. 
Nov 1 01:56:34.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:34.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:34.347653 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 01:56:34.347715 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 01:56:34.348212 systemd[1]: Finished systemd-sysext.service. Nov 1 01:56:34.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:34.357140 systemd[1]: Starting ensure-sysext.service... Nov 1 01:56:34.363991 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 01:56:34.370229 systemd-tmpfiles[1484]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 01:56:34.371238 systemd-tmpfiles[1484]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 01:56:34.372694 systemd-tmpfiles[1484]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 01:56:34.373593 systemd[1]: Reloading. 
Nov 1 01:56:34.391424 /usr/lib/systemd/system-generators/torcx-generator[1503]: time="2025-11-01T01:56:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:56:34.391447 /usr/lib/systemd/system-generators/torcx-generator[1503]: time="2025-11-01T01:56:34Z" level=info msg="torcx already run" Nov 1 01:56:34.422682 ldconfig[1446]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 01:56:34.447703 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:56:34.447712 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:56:34.459213 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:56:34.506297 systemd[1]: Finished ldconfig.service. Nov 1 01:56:34.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:34.514029 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 01:56:34.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:56:34.524578 systemd[1]: Starting audit-rules.service... Nov 1 01:56:34.532079 systemd[1]: Starting clean-ca-certificates.service... 
Nov 1 01:56:34.541211 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 01:56:34.540000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 01:56:34.540000 audit[1591]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc7cd049f0 a2=420 a3=0 items=0 ppid=1574 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:56:34.540000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 01:56:34.541474 augenrules[1591]: No rules Nov 1 01:56:34.550314 systemd[1]: Starting systemd-resolved.service... Nov 1 01:56:34.558347 systemd[1]: Starting systemd-timesyncd.service... Nov 1 01:56:34.567025 systemd[1]: Starting systemd-update-utmp.service... Nov 1 01:56:34.574020 systemd[1]: Finished audit-rules.service. Nov 1 01:56:34.581676 systemd[1]: Finished clean-ca-certificates.service. Nov 1 01:56:34.589685 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 01:56:34.602170 systemd[1]: Finished systemd-update-utmp.service. Nov 1 01:56:34.611438 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 01:56:34.612078 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 01:56:34.618999 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 01:56:34.625999 systemd[1]: Starting modprobe@loop.service... Nov 1 01:56:34.632403 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 01:56:34.632476 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:56:34.633402 systemd[1]: Starting systemd-update-done.service... 
Nov 1 01:56:34.638343 systemd-resolved[1598]: Positive Trust Anchors: Nov 1 01:56:34.638349 systemd-resolved[1598]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:56:34.638370 systemd-resolved[1598]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 01:56:34.640371 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 01:56:34.641003 systemd[1]: Started systemd-timesyncd.service. Nov 1 01:56:34.642437 systemd-resolved[1598]: Using system hostname 'ci-3510.3.8-n-0f05b56927'. Nov 1 01:56:34.649596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:56:34.649695 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 01:56:34.657561 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:56:34.657653 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 01:56:34.665578 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:56:34.665697 systemd[1]: Finished modprobe@loop.service. Nov 1 01:56:34.673659 systemd[1]: Finished systemd-update-done.service. Nov 1 01:56:34.683736 systemd[1]: Reached target time-set.target. Nov 1 01:56:34.691575 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 01:56:34.692237 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 01:56:34.698800 systemd[1]: Starting modprobe@drm.service... Nov 1 01:56:34.706041 systemd[1]: Starting modprobe@efi_pstore.service... 
Nov 1 01:56:34.712911 systemd[1]: Starting modprobe@loop.service... Nov 1 01:56:34.720574 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 01:56:34.720646 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:56:34.721298 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 01:56:34.735422 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 01:56:34.736104 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:56:34.736184 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 01:56:34.749332 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 01:56:34.765662 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 01:56:34.765738 systemd[1]: Finished modprobe@drm.service. Nov 1 01:56:34.776880 systemd-networkd[1345]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:79:3d:90.network. Nov 1 01:56:34.777376 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Nov 1 01:56:34.794566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:56:34.794647 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 01:56:34.805384 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 01:56:34.813560 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:56:34.813649 systemd[1]: Finished modprobe@loop.service. Nov 1 01:56:34.822805 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 01:56:34.822864 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Nov 1 01:56:34.823472 systemd[1]: Finished ensure-sysext.service. Nov 1 01:56:34.943436 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 01:56:35.035000 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:56:35.035052 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:56:35.074407 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 1 01:56:35.099356 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Nov 1 01:56:35.099419 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Nov 1 01:56:35.099896 systemd[1]: Started systemd-resolved.service. Nov 1 01:56:35.120103 systemd-networkd[1345]: bond0: Link UP Nov 1 01:56:35.120369 systemd-networkd[1345]: enp1s0f1np1: Link UP Nov 1 01:56:35.120588 systemd-networkd[1345]: enp1s0f1np1: Gained carrier Nov 1 01:56:35.121682 systemd-networkd[1345]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:79:3d:90.network. Nov 1 01:56:35.135483 systemd[1]: Reached target network.target. Nov 1 01:56:35.144339 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Nov 1 01:56:35.144448 kernel: bond0: active interface up! Nov 1 01:56:35.144477 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Nov 1 01:56:35.176459 systemd[1]: Reached target nss-lookup.target. Nov 1 01:56:35.188539 systemd[1]: Reached target sysinit.target. Nov 1 01:56:35.197465 systemd[1]: Started motdgen.path. Nov 1 01:56:35.205448 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 01:56:35.215445 systemd[1]: Started logrotate.timer. Nov 1 01:56:35.222473 systemd[1]: Started mdadm.timer. Nov 1 01:56:35.229416 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Nov 1 01:56:35.237418 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 01:56:35.237441 systemd[1]: Reached target paths.target. Nov 1 01:56:35.244426 systemd[1]: Reached target timers.target. Nov 1 01:56:35.251572 systemd[1]: Listening on dbus.socket. Nov 1 01:56:35.259090 systemd[1]: Starting docker.socket... Nov 1 01:56:35.266184 systemd[1]: Listening on sshd.socket. Nov 1 01:56:35.273463 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:56:35.273669 systemd[1]: Listening on docker.socket. Nov 1 01:56:35.280395 systemd[1]: Reached target sockets.target. Nov 1 01:56:35.281490 systemd-networkd[1345]: enp1s0f0np0: Link UP Nov 1 01:56:35.281683 systemd-networkd[1345]: bond0: Gained carrier Nov 1 01:56:35.281836 systemd-networkd[1345]: enp1s0f0np0: Gained carrier Nov 1 01:56:35.281883 systemd-timesyncd[1600]: Network configuration changed, trying to establish connection. Nov 1 01:56:35.297427 systemd[1]: Reached target basic.target. Nov 1 01:56:35.306389 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Nov 1 01:56:35.306420 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Nov 1 01:56:35.306620 systemd-timesyncd[1600]: Network configuration changed, trying to establish connection. Nov 1 01:56:35.306661 systemd-timesyncd[1600]: Network configuration changed, trying to establish connection. Nov 1 01:56:35.307058 systemd-networkd[1345]: enp1s0f1np1: Link DOWN Nov 1 01:56:35.307067 systemd-networkd[1345]: enp1s0f1np1: Lost carrier Nov 1 01:56:35.329600 systemd-timesyncd[1600]: Network configuration changed, trying to establish connection. Nov 1 01:56:35.329785 systemd-timesyncd[1600]: Network configuration changed, trying to establish connection. 
Nov 1 01:56:35.330497 systemd[1]: System is tainted: cgroupsv1 Nov 1 01:56:35.330530 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 01:56:35.330554 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 01:56:35.331178 systemd[1]: Starting containerd.service... Nov 1 01:56:35.337915 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Nov 1 01:56:35.346990 systemd[1]: Starting coreos-metadata.service... Nov 1 01:56:35.353980 systemd[1]: Starting dbus.service... Nov 1 01:56:35.359960 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 01:56:35.364672 jq[1636]: false Nov 1 01:56:35.367027 systemd[1]: Starting extend-filesystems.service... Nov 1 01:56:35.367590 coreos-metadata[1629]: Nov 01 01:56:35.367 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:56:35.370196 dbus-daemon[1635]: [system] SELinux support is enabled Nov 1 01:56:35.373421 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 01:56:35.374112 systemd[1]: Starting motdgen.service... Nov 1 01:56:35.375148 extend-filesystems[1638]: Found loop1 Nov 1 01:56:35.375148 extend-filesystems[1638]: Found sda Nov 1 01:56:35.416379 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Nov 1 01:56:35.381531 systemd[1]: Starting prepare-helm.service... 
Nov 1 01:56:35.416469 coreos-metadata[1632]: Nov 01 01:56:35.375 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:56:35.416593 extend-filesystems[1638]: Found sdb Nov 1 01:56:35.416593 extend-filesystems[1638]: Found sdb1 Nov 1 01:56:35.416593 extend-filesystems[1638]: Found sdb2 Nov 1 01:56:35.416593 extend-filesystems[1638]: Found sdb3 Nov 1 01:56:35.416593 extend-filesystems[1638]: Found usr Nov 1 01:56:35.416593 extend-filesystems[1638]: Found sdb4 Nov 1 01:56:35.416593 extend-filesystems[1638]: Found sdb6 Nov 1 01:56:35.416593 extend-filesystems[1638]: Found sdb7 Nov 1 01:56:35.416593 extend-filesystems[1638]: Found sdb9 Nov 1 01:56:35.416593 extend-filesystems[1638]: Checking size of /dev/sdb9 Nov 1 01:56:35.416593 extend-filesystems[1638]: Resized partition /dev/sdb9 Nov 1 01:56:35.588383 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 01:56:35.588532 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Nov 1 01:56:35.588551 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Nov 1 01:56:35.588568 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Nov 1 01:56:35.406237 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 01:56:35.588810 extend-filesystems[1654]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 01:56:35.424046 systemd[1]: Starting sshd-keygen.service... Nov 1 01:56:35.600170 dbus-daemon[1635]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 01:56:35.442467 systemd[1]: Starting systemd-logind.service... Nov 1 01:56:35.460790 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Nov 1 01:56:35.604809 update_engine[1670]: I1101 01:56:35.520537 1670 main.cc:92] Flatcar Update Engine starting Nov 1 01:56:35.604809 update_engine[1670]: I1101 01:56:35.524591 1670 update_check_scheduler.cc:74] Next update check in 11m23s Nov 1 01:56:35.461494 systemd[1]: Starting tcsd.service... Nov 1 01:56:35.605026 jq[1671]: true Nov 1 01:56:35.470998 systemd-logind[1668]: Watching system buttons on /dev/input/event3 (Power Button) Nov 1 01:56:35.471008 systemd-logind[1668]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 1 01:56:35.605300 tar[1675]: linux-amd64/LICENSE Nov 1 01:56:35.605300 tar[1675]: linux-amd64/helm Nov 1 01:56:35.471017 systemd-logind[1668]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Nov 1 01:56:35.605500 jq[1678]: true Nov 1 01:56:35.471162 systemd-logind[1668]: New seat seat0. Nov 1 01:56:35.473107 systemd[1]: Starting update-engine.service... Nov 1 01:56:35.480054 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 01:56:35.525256 systemd[1]: Started dbus.service. Nov 1 01:56:35.528130 systemd-networkd[1345]: enp1s0f1np1: Link UP Nov 1 01:56:35.528132 systemd-networkd[1345]: enp1s0f1np1: Gained carrier Nov 1 01:56:35.565825 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 01:56:35.565969 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 01:56:35.566133 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 01:56:35.566245 systemd[1]: Finished motdgen.service. Nov 1 01:56:35.574547 systemd-timesyncd[1600]: Network configuration changed, trying to establish connection. Nov 1 01:56:35.574590 systemd-timesyncd[1600]: Network configuration changed, trying to establish connection. Nov 1 01:56:35.574657 systemd-timesyncd[1600]: Network configuration changed, trying to establish connection. Nov 1 01:56:35.574764 systemd-timesyncd[1600]: Network configuration changed, trying to establish connection. 
Nov 1 01:56:35.580957 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 01:56:35.581086 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 01:56:35.605486 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Nov 1 01:56:35.605677 systemd[1]: Condition check resulted in tcsd.service being skipped. Nov 1 01:56:35.609873 systemd[1]: Started update-engine.service. Nov 1 01:56:35.612171 env[1679]: time="2025-11-01T01:56:35.612148502Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 01:56:35.622157 env[1679]: time="2025-11-01T01:56:35.622142441Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 01:56:35.622442 systemd[1]: Started systemd-logind.service. Nov 1 01:56:35.622906 env[1679]: time="2025-11-01T01:56:35.622889441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:56:35.623608 env[1679]: time="2025-11-01T01:56:35.623592196Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:56:35.624586 env[1679]: time="2025-11-01T01:56:35.623608257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:56:35.625120 env[1679]: time="2025-11-01T01:56:35.625108977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:56:35.625145 env[1679]: time="2025-11-01T01:56:35.625120566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 01:56:35.625145 env[1679]: time="2025-11-01T01:56:35.625128693Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 01:56:35.625145 env[1679]: time="2025-11-01T01:56:35.625134547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 01:56:35.627144 env[1679]: time="2025-11-01T01:56:35.625173946Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:56:35.627255 env[1679]: time="2025-11-01T01:56:35.627247286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:56:35.627398 bash[1703]: Updated "/home/core/.ssh/authorized_keys" Nov 1 01:56:35.627484 env[1679]: time="2025-11-01T01:56:35.627344577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:56:35.627484 env[1679]: time="2025-11-01T01:56:35.627354067Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Nov 1 01:56:35.627484 env[1679]: time="2025-11-01T01:56:35.627386418Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 01:56:35.627484 env[1679]: time="2025-11-01T01:56:35.627393399Z" level=info msg="metadata content store policy set" policy=shared Nov 1 01:56:35.630587 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 01:56:35.633637 env[1679]: time="2025-11-01T01:56:35.633597591Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 01:56:35.633637 env[1679]: time="2025-11-01T01:56:35.633611867Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 01:56:35.633637 env[1679]: time="2025-11-01T01:56:35.633619936Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 01:56:35.633637 env[1679]: time="2025-11-01T01:56:35.633634833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 01:56:35.633711 env[1679]: time="2025-11-01T01:56:35.633644804Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 01:56:35.633711 env[1679]: time="2025-11-01T01:56:35.633652731Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 01:56:35.633711 env[1679]: time="2025-11-01T01:56:35.633659567Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 01:56:35.633711 env[1679]: time="2025-11-01T01:56:35.633667620Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 01:56:35.633711 env[1679]: time="2025-11-01T01:56:35.633674956Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Nov 1 01:56:35.633711 env[1679]: time="2025-11-01T01:56:35.633682266Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 01:56:35.633711 env[1679]: time="2025-11-01T01:56:35.633688841Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 01:56:35.633711 env[1679]: time="2025-11-01T01:56:35.633695448Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 01:56:35.633835 env[1679]: time="2025-11-01T01:56:35.633744722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 01:56:35.633835 env[1679]: time="2025-11-01T01:56:35.633791620Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 01:56:35.633980 env[1679]: time="2025-11-01T01:56:35.633970911Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 01:56:35.634003 env[1679]: time="2025-11-01T01:56:35.633986701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 01:56:35.634003 env[1679]: time="2025-11-01T01:56:35.633994730Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 01:56:35.634034 env[1679]: time="2025-11-01T01:56:35.634019891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 01:56:35.634034 env[1679]: time="2025-11-01T01:56:35.634029291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 01:56:35.634067 env[1679]: time="2025-11-01T01:56:35.634036205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Nov 1 01:56:35.634067 env[1679]: time="2025-11-01T01:56:35.634042554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 01:56:35.634067 env[1679]: time="2025-11-01T01:56:35.634049195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 01:56:35.634067 env[1679]: time="2025-11-01T01:56:35.634056517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 01:56:35.634067 env[1679]: time="2025-11-01T01:56:35.634063438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 01:56:35.634146 env[1679]: time="2025-11-01T01:56:35.634069542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 01:56:35.634146 env[1679]: time="2025-11-01T01:56:35.634082072Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 01:56:35.634183 env[1679]: time="2025-11-01T01:56:35.634157909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 01:56:35.634183 env[1679]: time="2025-11-01T01:56:35.634170948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 01:56:35.634183 env[1679]: time="2025-11-01T01:56:35.634178249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 01:56:35.634231 env[1679]: time="2025-11-01T01:56:35.634184536Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 01:56:35.634231 env[1679]: time="2025-11-01T01:56:35.634192377Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 01:56:35.634231 env[1679]: time="2025-11-01T01:56:35.634198643Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 01:56:35.634231 env[1679]: time="2025-11-01T01:56:35.634211377Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 01:56:35.634291 env[1679]: time="2025-11-01T01:56:35.634232025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 01:56:35.634392 env[1679]: time="2025-11-01T01:56:35.634347875Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 01:56:35.634392 env[1679]: time="2025-11-01T01:56:35.634383309Z" level=info msg="Connect containerd service" Nov 1 01:56:35.635962 env[1679]: time="2025-11-01T01:56:35.634401624Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 01:56:35.635962 env[1679]: time="2025-11-01T01:56:35.634668536Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 01:56:35.635962 env[1679]: time="2025-11-01T01:56:35.634751513Z" level=info msg="Start subscribing containerd event" Nov 1 01:56:35.635962 env[1679]: time="2025-11-01T01:56:35.634912055Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 01:56:35.635962 env[1679]: time="2025-11-01T01:56:35.634951521Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 1 01:56:35.635962 env[1679]: time="2025-11-01T01:56:35.634967803Z" level=info msg="Start recovering state" Nov 1 01:56:35.635962 env[1679]: time="2025-11-01T01:56:35.635032944Z" level=info msg="Start event monitor" Nov 1 01:56:35.635962 env[1679]: time="2025-11-01T01:56:35.634983739Z" level=info msg="containerd successfully booted in 0.023836s" Nov 1 01:56:35.635962 env[1679]: time="2025-11-01T01:56:35.635052455Z" level=info msg="Start snapshots syncer" Nov 1 01:56:35.635962 env[1679]: time="2025-11-01T01:56:35.635066082Z" level=info msg="Start cni network conf syncer for default" Nov 1 01:56:35.635962 env[1679]: time="2025-11-01T01:56:35.635073688Z" level=info msg="Start streaming server" Nov 1 01:56:35.640436 systemd[1]: Started containerd.service. Nov 1 01:56:35.649050 systemd[1]: Started locksmithd.service. Nov 1 01:56:35.655451 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 01:56:35.655535 systemd[1]: Reached target system-config.target. Nov 1 01:56:35.663413 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 01:56:35.663500 systemd[1]: Reached target user-config.target. Nov 1 01:56:35.712094 locksmithd[1717]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 01:56:35.882602 tar[1675]: linux-amd64/README.md Nov 1 01:56:35.885826 systemd[1]: Finished prepare-helm.service. Nov 1 01:56:35.911331 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Nov 1 01:56:35.939845 extend-filesystems[1654]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Nov 1 01:56:35.939845 extend-filesystems[1654]: old_desc_blocks = 1, new_desc_blocks = 56 Nov 1 01:56:35.939845 extend-filesystems[1654]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. 
Nov 1 01:56:35.978363 extend-filesystems[1638]: Resized filesystem in /dev/sdb9 Nov 1 01:56:35.940301 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 01:56:35.940431 systemd[1]: Finished extend-filesystems.service. Nov 1 01:56:36.685646 systemd-timesyncd[1600]: Network configuration changed, trying to establish connection. Nov 1 01:56:36.749456 systemd-networkd[1345]: bond0: Gained IPv6LL Nov 1 01:56:36.749649 systemd-timesyncd[1600]: Network configuration changed, trying to establish connection. Nov 1 01:56:36.750568 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 01:56:36.760664 systemd[1]: Reached target network-online.target. Nov 1 01:56:36.769434 systemd[1]: Starting kubelet.service... Nov 1 01:56:36.884152 sshd_keygen[1667]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 01:56:36.896235 systemd[1]: Finished sshd-keygen.service. Nov 1 01:56:36.905350 systemd[1]: Starting issuegen.service... Nov 1 01:56:36.913641 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 01:56:36.913758 systemd[1]: Finished issuegen.service. Nov 1 01:56:36.921240 systemd[1]: Starting systemd-user-sessions.service... Nov 1 01:56:36.930715 systemd[1]: Finished systemd-user-sessions.service. Nov 1 01:56:36.940187 systemd[1]: Started getty@tty1.service. Nov 1 01:56:36.948107 systemd[1]: Started serial-getty@ttyS1.service. Nov 1 01:56:36.956553 systemd[1]: Reached target getty.target. Nov 1 01:56:37.493393 systemd[1]: Started kubelet.service. 
Nov 1 01:56:37.898057 kubelet[1758]: E1101 01:56:37.898029 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:56:37.899156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:56:37.899243 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:56:38.564545 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Nov 1 01:56:41.550375 coreos-metadata[1632]: Nov 01 01:56:41.550 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Nov 1 01:56:41.551297 coreos-metadata[1629]: Nov 01 01:56:41.550 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Nov 1 01:56:41.983713 login[1752]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Nov 1 01:56:41.984118 login[1751]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 01:56:41.990764 systemd[1]: Created slice user-500.slice. Nov 1 01:56:41.991250 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 01:56:41.992208 systemd-logind[1668]: New session 1 of user core. Nov 1 01:56:41.997032 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 01:56:41.997722 systemd[1]: Starting user@500.service... Nov 1 01:56:41.999832 (systemd)[1782]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:56:42.071641 systemd[1782]: Queued start job for default target default.target. 
Nov 1 01:56:42.071750 systemd[1782]: Reached target paths.target. Nov 1 01:56:42.071761 systemd[1782]: Reached target sockets.target. Nov 1 01:56:42.071770 systemd[1782]: Reached target timers.target. Nov 1 01:56:42.071777 systemd[1782]: Reached target basic.target. Nov 1 01:56:42.071798 systemd[1782]: Reached target default.target. Nov 1 01:56:42.071813 systemd[1782]: Startup finished in 68ms. Nov 1 01:56:42.071876 systemd[1]: Started user@500.service. Nov 1 01:56:42.072410 systemd[1]: Started session-1.scope. Nov 1 01:56:42.550735 coreos-metadata[1629]: Nov 01 01:56:42.550 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Nov 1 01:56:42.551011 coreos-metadata[1632]: Nov 01 01:56:42.550 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Nov 1 01:56:42.989322 login[1752]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 01:56:42.992319 systemd-logind[1668]: New session 2 of user core. Nov 1 01:56:42.992900 systemd[1]: Started session-2.scope. Nov 1 01:56:43.107508 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Nov 1 01:56:43.107666 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Nov 1 01:56:43.876098 systemd[1]: Created slice system-sshd.slice. Nov 1 01:56:43.876811 systemd[1]: Started sshd@0-139.178.90.71:22-147.75.109.163:53132.service. Nov 1 01:56:43.951891 sshd[1804]: Accepted publickey for core from 147.75.109.163 port 53132 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:56:43.955511 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:56:43.966394 systemd-logind[1668]: New session 3 of user core. Nov 1 01:56:43.968790 systemd[1]: Started session-3.scope. Nov 1 01:56:44.021437 systemd[1]: Started sshd@1-139.178.90.71:22-147.75.109.163:53148.service. 
Nov 1 01:56:44.055198 sshd[1809]: Accepted publickey for core from 147.75.109.163 port 53148 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:56:44.055909 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:56:44.058317 systemd-logind[1668]: New session 4 of user core. Nov 1 01:56:44.058814 systemd[1]: Started session-4.scope. Nov 1 01:56:44.110679 sshd[1809]: pam_unix(sshd:session): session closed for user core Nov 1 01:56:44.112239 systemd[1]: Started sshd@2-139.178.90.71:22-147.75.109.163:53160.service. Nov 1 01:56:44.112550 systemd[1]: sshd@1-139.178.90.71:22-147.75.109.163:53148.service: Deactivated successfully. Nov 1 01:56:44.113090 systemd-logind[1668]: Session 4 logged out. Waiting for processes to exit. Nov 1 01:56:44.113097 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 01:56:44.113467 systemd-logind[1668]: Removed session 4. Nov 1 01:56:44.145797 sshd[1815]: Accepted publickey for core from 147.75.109.163 port 53160 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:56:44.146937 sshd[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:56:44.150771 systemd-logind[1668]: New session 5 of user core. Nov 1 01:56:44.151706 systemd[1]: Started session-5.scope. Nov 1 01:56:44.220872 sshd[1815]: pam_unix(sshd:session): session closed for user core Nov 1 01:56:44.226720 systemd[1]: sshd@2-139.178.90.71:22-147.75.109.163:53160.service: Deactivated successfully. Nov 1 01:56:44.229308 systemd-logind[1668]: Session 5 logged out. Waiting for processes to exit. Nov 1 01:56:44.229358 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 01:56:44.231815 systemd-logind[1668]: Removed session 5. Nov 1 01:56:44.684726 coreos-metadata[1632]: Nov 01 01:56:44.684 INFO Fetch successful Nov 1 01:56:44.765755 systemd[1]: Finished coreos-metadata.service. Nov 1 01:56:44.766616 systemd[1]: Started packet-phone-home.service. 
Nov 1 01:56:44.778533 curl[1828]: % Total % Received % Xferd Average Speed Time Time Time Current Nov 1 01:56:44.779046 curl[1828]: Dload Upload Total Spent Left Speed Nov 1 01:56:44.793546 coreos-metadata[1629]: Nov 01 01:56:44.793 INFO Fetch successful Nov 1 01:56:44.871302 unknown[1629]: wrote ssh authorized keys file for user: core Nov 1 01:56:44.884708 update-ssh-keys[1830]: Updated "/home/core/.ssh/authorized_keys" Nov 1 01:56:44.884968 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Nov 1 01:56:44.885151 systemd[1]: Reached target multi-user.target. Nov 1 01:56:44.885919 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 01:56:44.890027 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 01:56:44.890139 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 01:56:44.890254 systemd[1]: Startup finished in 26.313s (kernel) + 16.608s (userspace) = 42.921s. Nov 1 01:56:45.144908 curl[1828]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Nov 1 01:56:45.147378 systemd[1]: packet-phone-home.service: Deactivated successfully. Nov 1 01:56:47.998523 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 01:56:47.999046 systemd[1]: Stopped kubelet.service. Nov 1 01:56:48.002279 systemd[1]: Starting kubelet.service... Nov 1 01:56:48.292719 systemd[1]: Started kubelet.service. 
Nov 1 01:56:48.321334 kubelet[1844]: E1101 01:56:48.321310 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:56:48.323334 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:56:48.323469 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:56:54.228284 systemd[1]: Started sshd@3-139.178.90.71:22-147.75.109.163:43426.service. Nov 1 01:56:54.261079 sshd[1863]: Accepted publickey for core from 147.75.109.163 port 43426 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:56:54.262044 sshd[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:56:54.265235 systemd-logind[1668]: New session 6 of user core. Nov 1 01:56:54.265977 systemd[1]: Started session-6.scope. Nov 1 01:56:54.321456 sshd[1863]: pam_unix(sshd:session): session closed for user core Nov 1 01:56:54.322936 systemd[1]: Started sshd@4-139.178.90.71:22-147.75.109.163:43442.service. Nov 1 01:56:54.323236 systemd[1]: sshd@3-139.178.90.71:22-147.75.109.163:43426.service: Deactivated successfully. Nov 1 01:56:54.323740 systemd-logind[1668]: Session 6 logged out. Waiting for processes to exit. Nov 1 01:56:54.323760 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 01:56:54.324191 systemd-logind[1668]: Removed session 6. Nov 1 01:56:54.355760 sshd[1869]: Accepted publickey for core from 147.75.109.163 port 43442 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:56:54.356648 sshd[1869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:56:54.359825 systemd-logind[1668]: New session 7 of user core. Nov 1 01:56:54.360477 systemd[1]: Started session-7.scope. 
Nov 1 01:56:54.414894 sshd[1869]: pam_unix(sshd:session): session closed for user core Nov 1 01:56:54.421191 systemd[1]: Started sshd@5-139.178.90.71:22-147.75.109.163:43450.service. Nov 1 01:56:54.422912 systemd[1]: sshd@4-139.178.90.71:22-147.75.109.163:43442.service: Deactivated successfully. Nov 1 01:56:54.425482 systemd-logind[1668]: Session 7 logged out. Waiting for processes to exit. Nov 1 01:56:54.425568 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 01:56:54.426379 systemd-logind[1668]: Removed session 7. Nov 1 01:56:54.458598 sshd[1875]: Accepted publickey for core from 147.75.109.163 port 43450 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:56:54.461918 sshd[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:56:54.472681 systemd-logind[1668]: New session 8 of user core. Nov 1 01:56:54.475140 systemd[1]: Started session-8.scope. Nov 1 01:56:54.558175 sshd[1875]: pam_unix(sshd:session): session closed for user core Nov 1 01:56:54.564460 systemd[1]: Started sshd@6-139.178.90.71:22-147.75.109.163:43456.service. Nov 1 01:56:54.566077 systemd[1]: sshd@5-139.178.90.71:22-147.75.109.163:43450.service: Deactivated successfully. Nov 1 01:56:54.568618 systemd-logind[1668]: Session 8 logged out. Waiting for processes to exit. Nov 1 01:56:54.568786 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 01:56:54.571183 systemd-logind[1668]: Removed session 8. Nov 1 01:56:54.630187 sshd[1883]: Accepted publickey for core from 147.75.109.163 port 43456 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:56:54.632575 sshd[1883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:56:54.640137 systemd-logind[1668]: New session 9 of user core. Nov 1 01:56:54.641732 systemd[1]: Started session-9.scope. 
Nov 1 01:56:54.735661 sudo[1888]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 01:56:54.736372 sudo[1888]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 01:56:54.756817 dbus-daemon[1635]: avc: received setenforce notice (enforcing=1985872704) Nov 1 01:56:54.761961 sudo[1888]: pam_unix(sudo:session): session closed for user root Nov 1 01:56:54.767463 sshd[1883]: pam_unix(sshd:session): session closed for user core Nov 1 01:56:54.773988 systemd[1]: Started sshd@7-139.178.90.71:22-147.75.109.163:43464.service. Nov 1 01:56:54.775700 systemd[1]: sshd@6-139.178.90.71:22-147.75.109.163:43456.service: Deactivated successfully. Nov 1 01:56:54.778312 systemd-logind[1668]: Session 9 logged out. Waiting for processes to exit. Nov 1 01:56:54.778380 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 01:56:54.779010 systemd-logind[1668]: Removed session 9. Nov 1 01:56:54.811228 sshd[1890]: Accepted publickey for core from 147.75.109.163 port 43464 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:56:54.814510 sshd[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:56:54.825038 systemd-logind[1668]: New session 10 of user core. Nov 1 01:56:54.827364 systemd[1]: Started session-10.scope. Nov 1 01:56:54.906168 sudo[1897]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 01:56:54.906887 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 01:56:54.914494 sudo[1897]: pam_unix(sudo:session): session closed for user root Nov 1 01:56:54.927927 sudo[1896]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 01:56:54.928619 sudo[1896]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 01:56:54.954932 systemd[1]: Stopping audit-rules.service... 
Nov 1 01:56:54.957000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Nov 1 01:56:54.958583 auditctl[1900]: No rules
Nov 1 01:56:54.959498 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 1 01:56:54.960110 systemd[1]: Stopped audit-rules.service.
Nov 1 01:56:54.964076 kernel: kauditd_printk_skb: 88 callbacks suppressed
Nov 1 01:56:54.964217 kernel: audit: type=1305 audit(1761962214.957:139): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Nov 1 01:56:54.964409 systemd[1]: Starting audit-rules.service...
Nov 1 01:56:54.957000 audit[1900]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff18d2b630 a2=420 a3=0 items=0 ppid=1 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.005012 augenrules[1918]: No rules
Nov 1 01:56:55.005699 systemd[1]: Finished audit-rules.service.
Nov 1 01:56:55.006568 sudo[1896]: pam_unix(sudo:session): session closed for user root
Nov 1 01:56:55.008121 sshd[1890]: pam_unix(sshd:session): session closed for user core
Nov 1 01:56:55.010539 systemd[1]: Started sshd@8-139.178.90.71:22-147.75.109.163:43476.service.
Nov 1 01:56:55.011099 systemd[1]: sshd@7-139.178.90.71:22-147.75.109.163:43464.service: Deactivated successfully.
Nov 1 01:56:55.011255 kernel: audit: type=1300 audit(1761962214.957:139): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff18d2b630 a2=420 a3=0 items=0 ppid=1 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.011291 kernel: audit: type=1327 audit(1761962214.957:139): proctitle=2F7362696E2F617564697463746C002D44
Nov 1 01:56:54.957000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Nov 1 01:56:55.011916 systemd-logind[1668]: Session 10 logged out. Waiting for processes to exit.
Nov 1 01:56:55.012003 systemd[1]: session-10.scope: Deactivated successfully.
Nov 1 01:56:55.013017 systemd-logind[1668]: Removed session 10.
Nov 1 01:56:55.020816 kernel: audit: type=1131 audit(1761962214.959:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:54.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.043263 kernel: audit: type=1130 audit(1761962215.005:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.065715 kernel: audit: type=1106 audit(1761962215.006:142): pid=1896 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.006000 audit[1896]: USER_END pid=1896 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.069795 sshd[1924]: Accepted publickey for core from 147.75.109.163 port 43476 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ
Nov 1 01:56:55.070620 sshd[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:56:55.072986 systemd-logind[1668]: New session 11 of user core.
Nov 1 01:56:55.073400 systemd[1]: Started session-11.scope.
Nov 1 01:56:55.091775 kernel: audit: type=1104 audit(1761962215.006:143): pid=1896 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.006000 audit[1896]: CRED_DISP pid=1896 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.115411 kernel: audit: type=1106 audit(1761962215.008:144): pid=1890 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 01:56:55.008000 audit[1890]: USER_END pid=1890 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 01:56:55.120814 sudo[1929]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 01:56:55.120940 sudo[1929]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Nov 1 01:56:55.140096 systemd[1]: Starting docker.service...
Nov 1 01:56:55.008000 audit[1890]: CRED_DISP pid=1890 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 01:56:55.157042 env[1944]: time="2025-11-01T01:56:55.157014674Z" level=info msg="Starting up"
Nov 1 01:56:55.157680 env[1944]: time="2025-11-01T01:56:55.157669487Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 1 01:56:55.157680 env[1944]: time="2025-11-01T01:56:55.157678744Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 1 01:56:55.157733 env[1944]: time="2025-11-01T01:56:55.157691828Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Nov 1 01:56:55.157733 env[1944]: time="2025-11-01T01:56:55.157698287Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 1 01:56:55.159074 env[1944]: time="2025-11-01T01:56:55.159038593Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 1 01:56:55.159108 env[1944]: time="2025-11-01T01:56:55.159072450Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 1 01:56:55.159128 env[1944]: time="2025-11-01T01:56:55.159098063Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Nov 1 01:56:55.159128 env[1944]: time="2025-11-01T01:56:55.159115690Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 1 01:56:55.173774 kernel: audit: type=1104 audit(1761962215.008:145): pid=1890 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 01:56:55.173804 kernel: audit: type=1130 audit(1761962215.010:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.90.71:22-147.75.109.163:43476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.90.71:22-147.75.109.163:43476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.90.71:22-147.75.109.163:43464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.069000 audit[1924]: USER_ACCT pid=1924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 01:56:55.070000 audit[1924]: CRED_ACQ pid=1924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 01:56:55.070000 audit[1924]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf6217cd0 a2=3 a3=0 items=0 ppid=1 pid=1924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.070000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 01:56:55.075000 audit[1924]: USER_START pid=1924 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 01:56:55.075000 audit[1928]: CRED_ACQ pid=1928 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 01:56:55.120000 audit[1929]: USER_ACCT pid=1929 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.120000 audit[1929]: CRED_REFR pid=1929 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.121000 audit[1929]: USER_START pid=1929 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.350221 env[1944]: time="2025-11-01T01:56:55.350010252Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Nov 1 01:56:55.350221 env[1944]: time="2025-11-01T01:56:55.350070118Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Nov 1 01:56:55.350906 env[1944]: time="2025-11-01T01:56:55.350677363Z" level=info msg="Loading containers: start."
Nov 1 01:56:55.403000 audit[1992]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.403000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcd4cbc1c0 a2=0 a3=7ffcd4cbc1ac items=0 ppid=1944 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.403000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Nov 1 01:56:55.404000 audit[1994]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.404000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffca624ec10 a2=0 a3=7ffca624ebfc items=0 ppid=1944 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.404000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Nov 1 01:56:55.405000 audit[1996]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.405000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffe9350d40 a2=0 a3=7fffe9350d2c items=0 ppid=1944 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.405000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Nov 1 01:56:55.406000 audit[1998]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.406000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd259ef5c0 a2=0 a3=7ffd259ef5ac items=0 ppid=1944 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.406000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Nov 1 01:56:55.407000 audit[2000]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2000 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.407000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffddd59b0d0 a2=0 a3=7ffddd59b0bc items=0 ppid=1944 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.407000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Nov 1 01:56:55.459000 audit[2005]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.459000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdb3499960 a2=0 a3=7ffdb349994c items=0 ppid=1944 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.459000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Nov 1 01:56:55.466000 audit[2007]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.466000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce60f6940 a2=0 a3=7ffce60f692c items=0 ppid=1944 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.466000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Nov 1 01:56:55.470000 audit[2009]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.470000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffeda392fb0 a2=0 a3=7ffeda392f9c items=0 ppid=1944 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.470000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Nov 1 01:56:55.475000 audit[2011]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.475000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd05216d10 a2=0 a3=7ffd05216cfc items=0 ppid=1944 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.475000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Nov 1 01:56:55.489000 audit[2015]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.489000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffd3d1aed0 a2=0 a3=7fffd3d1aebc items=0 ppid=1944 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.489000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Nov 1 01:56:55.507000 audit[2016]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.507000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff6c13df90 a2=0 a3=7fff6c13df7c items=0 ppid=1944 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.507000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Nov 1 01:56:55.540389 kernel: Initializing XFRM netlink socket
Nov 1 01:56:55.617074 env[1944]: time="2025-11-01T01:56:55.617024027Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 1 01:56:55.618016 systemd-timesyncd[1600]: Network configuration changed, trying to establish connection.
Nov 1 01:56:55.630000 audit[2024]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.630000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffcfdad4470 a2=0 a3=7ffcfdad445c items=0 ppid=1944 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.630000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Nov 1 01:56:55.652000 audit[2027]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.652000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff262df160 a2=0 a3=7fff262df14c items=0 ppid=1944 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.652000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Nov 1 01:56:55.654000 audit[2030]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.654000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe1520b460 a2=0 a3=7ffe1520b44c items=0 ppid=1944 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.654000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Nov 1 01:56:55.655000 audit[2032]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.655000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe633b9120 a2=0 a3=7ffe633b910c items=0 ppid=1944 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.655000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Nov 1 01:56:55.656000 audit[2034]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.656000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc6a501460 a2=0 a3=7ffc6a50144c items=0 ppid=1944 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.656000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Nov 1 01:56:55.657000 audit[2036]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.657000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd11e57b90 a2=0 a3=7ffd11e57b7c items=0 ppid=1944 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.657000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Nov 1 01:56:55.658000 audit[2038]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.658000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc51dfd610 a2=0 a3=7ffc51dfd5fc items=0 ppid=1944 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.658000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Nov 1 01:56:55.663000 audit[2041]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.663000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd2892acf0 a2=0 a3=7ffd2892acdc items=0 ppid=1944 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.663000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Nov 1 01:56:55.664000 audit[2043]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.664000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffd299e8f60 a2=0 a3=7ffd299e8f4c items=0 ppid=1944 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.664000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Nov 1 01:56:55.666000 audit[2045]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.666000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff1fcc6210 a2=0 a3=7fff1fcc61fc items=0 ppid=1944 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.666000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Nov 1 01:56:55.667000 audit[2047]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.667000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc84280f90 a2=0 a3=7ffc84280f7c items=0 ppid=1944 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.667000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Nov 1 01:56:55.667777 systemd-networkd[1345]: docker0: Link UP
Nov 1 01:56:55.671000 audit[2051]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.671000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffcf72af80 a2=0 a3=7fffcf72af6c items=0 ppid=1944 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.671000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Nov 1 01:56:55.689000 audit[2052]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 01:56:55.689000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc4b63c280 a2=0 a3=7ffc4b63c26c items=0 ppid=1944 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 01:56:55.689000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Nov 1 01:56:55.690547 env[1944]: time="2025-11-01T01:56:55.690496727Z" level=info msg="Loading containers: done."
Nov 1 01:56:55.700261 env[1944]: time="2025-11-01T01:56:55.700225861Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 01:56:55.700481 env[1944]: time="2025-11-01T01:56:55.700457637Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Nov 1 01:56:55.700601 env[1944]: time="2025-11-01T01:56:55.700579729Z" level=info msg="Daemon has completed initialization"
Nov 1 01:56:55.713523 systemd[1]: Started docker.service.
Nov 1 01:56:55.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:55.722615 env[1944]: time="2025-11-01T01:56:55.722522549Z" level=info msg="API listen on /run/docker.sock"
Nov 1 01:56:56.019990 systemd-timesyncd[1600]: Contacted time server [2604:a880:800:a1::ec9:5001]:123 (2.flatcar.pool.ntp.org).
Nov 1 01:56:56.020039 systemd-timesyncd[1600]: Initial clock synchronization to Sat 2025-11-01 01:56:55.703324 UTC.
Nov 1 01:56:56.694960 env[1679]: time="2025-11-01T01:56:56.694895249Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 1 01:56:57.183540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2767823199.mount: Deactivated successfully.
Nov 1 01:56:58.300212 env[1679]: time="2025-11-01T01:56:58.300157545Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:56:58.300847 env[1679]: time="2025-11-01T01:56:58.300794253Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:56:58.301833 env[1679]: time="2025-11-01T01:56:58.301792401Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:56:58.302775 env[1679]: time="2025-11-01T01:56:58.302736652Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:56:58.303168 env[1679]: time="2025-11-01T01:56:58.303119329Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 1 01:56:58.303549 env[1679]: time="2025-11-01T01:56:58.303537643Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 1 01:56:58.498021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 1 01:56:58.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:58.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:58.498550 systemd[1]: Stopped kubelet.service.
Nov 1 01:56:58.501732 systemd[1]: Starting kubelet.service...
Nov 1 01:56:58.756921 systemd[1]: Started kubelet.service.
Nov 1 01:56:58.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 01:56:58.779094 kubelet[2111]: E1101 01:56:58.779072 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 01:56:58.780044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 01:56:58.780147 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 01:56:58.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Nov 1 01:56:59.711756 env[1679]: time="2025-11-01T01:56:59.711694535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:56:59.712423 env[1679]: time="2025-11-01T01:56:59.712354773Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:56:59.713440 env[1679]: time="2025-11-01T01:56:59.713402912Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:56:59.714505 env[1679]: time="2025-11-01T01:56:59.714456432Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:56:59.714919 env[1679]: time="2025-11-01T01:56:59.714869599Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 1 01:56:59.715220 env[1679]: time="2025-11-01T01:56:59.715208998Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 1 01:57:00.858100 env[1679]: time="2025-11-01T01:57:00.858045740Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:57:00.858642 env[1679]: time="2025-11-01T01:57:00.858604035Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:57:00.860422 env[1679]: time="2025-11-01T01:57:00.860346680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:57:00.861581 env[1679]: time="2025-11-01T01:57:00.861567978Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:57:00.862085 env[1679]: time="2025-11-01T01:57:00.862072423Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 1 01:57:00.862389 env[1679]: time="2025-11-01T01:57:00.862377156Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 1 01:57:01.884750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234318673.mount: Deactivated successfully.
Nov 1 01:57:02.278566 env[1679]: time="2025-11-01T01:57:02.278542965Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:02.279013 env[1679]: time="2025-11-01T01:57:02.279002875Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:02.279634 env[1679]: time="2025-11-01T01:57:02.279620257Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:02.280222 env[1679]: time="2025-11-01T01:57:02.280211866Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:02.280526 env[1679]: time="2025-11-01T01:57:02.280509637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 01:57:02.280965 env[1679]: time="2025-11-01T01:57:02.280924899Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 01:57:02.866572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3102737805.mount: Deactivated successfully. 
Nov 1 01:57:03.650423 env[1679]: time="2025-11-01T01:57:03.650363634Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:03.651016 env[1679]: time="2025-11-01T01:57:03.650964930Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:03.652175 env[1679]: time="2025-11-01T01:57:03.652134593Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:03.653147 env[1679]: time="2025-11-01T01:57:03.653103376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:03.653679 env[1679]: time="2025-11-01T01:57:03.653641523Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 01:57:03.654072 env[1679]: time="2025-11-01T01:57:03.654041719Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 01:57:04.208680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4136757057.mount: Deactivated successfully. 
Nov 1 01:57:04.209734 env[1679]: time="2025-11-01T01:57:04.209685415Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:04.210738 env[1679]: time="2025-11-01T01:57:04.210693041Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:04.211333 env[1679]: time="2025-11-01T01:57:04.211291010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:04.211982 env[1679]: time="2025-11-01T01:57:04.211942720Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:04.212283 env[1679]: time="2025-11-01T01:57:04.212237733Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 01:57:04.212531 env[1679]: time="2025-11-01T01:57:04.212490060Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 01:57:04.787993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133982389.mount: Deactivated successfully. 
Nov 1 01:57:06.420098 env[1679]: time="2025-11-01T01:57:06.420043981Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:06.420769 env[1679]: time="2025-11-01T01:57:06.420733658Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:06.421827 env[1679]: time="2025-11-01T01:57:06.421786713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:06.422805 env[1679]: time="2025-11-01T01:57:06.422772579Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:06.423295 env[1679]: time="2025-11-01T01:57:06.423261718Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 01:57:08.467901 systemd[1]: Stopped kubelet.service. Nov 1 01:57:08.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:57:08.469201 systemd[1]: Starting kubelet.service... Nov 1 01:57:08.473469 kernel: kauditd_printk_skb: 88 callbacks suppressed Nov 1 01:57:08.473512 kernel: audit: type=1130 audit(1761962228.466:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:57:08.484214 systemd[1]: Reloading. Nov 1 01:57:08.509128 /usr/lib/systemd/system-generators/torcx-generator[2201]: time="2025-11-01T01:57:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:57:08.509143 /usr/lib/systemd/system-generators/torcx-generator[2201]: time="2025-11-01T01:57:08Z" level=info msg="torcx already run" Nov 1 01:57:08.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:57:08.512384 kernel: audit: type=1131 audit(1761962228.466:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:57:08.592291 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:57:08.592299 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:57:08.603967 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:57:08.668305 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 01:57:08.668356 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 01:57:08.668487 systemd[1]: Stopped kubelet.service. 
Nov 1 01:57:08.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 01:57:08.669377 systemd[1]: Starting kubelet.service... Nov 1 01:57:08.725582 kernel: audit: type=1130 audit(1761962228.667:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 01:57:08.874620 systemd[1]: Started kubelet.service. Nov 1 01:57:08.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:57:08.929140 kubelet[2273]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:57:08.929140 kubelet[2273]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 01:57:08.929140 kubelet[2273]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 01:57:08.929394 kubelet[2273]: I1101 01:57:08.929172 2273 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 01:57:08.932430 kernel: audit: type=1130 audit(1761962228.873:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:57:09.144047 kubelet[2273]: I1101 01:57:09.143979 2273 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 01:57:09.144047 kubelet[2273]: I1101 01:57:09.143991 2273 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 01:57:09.144176 kubelet[2273]: I1101 01:57:09.144133 2273 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 01:57:09.168851 kubelet[2273]: E1101 01:57:09.168807 2273 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.90.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.90.71:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:57:09.172938 kubelet[2273]: I1101 01:57:09.172900 2273 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:57:09.176830 kubelet[2273]: E1101 01:57:09.176781 2273 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:57:09.176830 kubelet[2273]: I1101 01:57:09.176794 2273 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Nov 1 01:57:09.196196 kubelet[2273]: I1101 01:57:09.196152 2273 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 01:57:09.197491 kubelet[2273]: I1101 01:57:09.197446 2273 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:57:09.197604 kubelet[2273]: I1101 01:57:09.197465 2273 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-0f05b56927","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":nu
ll,"CgroupVersion":1} Nov 1 01:57:09.197604 kubelet[2273]: I1101 01:57:09.197580 2273 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 01:57:09.197604 kubelet[2273]: I1101 01:57:09.197589 2273 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 01:57:09.197748 kubelet[2273]: I1101 01:57:09.197663 2273 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:57:09.200970 kubelet[2273]: I1101 01:57:09.200931 2273 kubelet.go:446] "Attempting to sync node with API server" Nov 1 01:57:09.200970 kubelet[2273]: I1101 01:57:09.200945 2273 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:57:09.200970 kubelet[2273]: I1101 01:57:09.200959 2273 kubelet.go:352] "Adding apiserver pod source" Nov 1 01:57:09.200970 kubelet[2273]: I1101 01:57:09.200966 2273 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:57:09.207726 kubelet[2273]: W1101 01:57:09.207665 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.90.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-0f05b56927&limit=500&resourceVersion=0": dial tcp 139.178.90.71:6443: connect: connection refused Nov 1 01:57:09.207726 kubelet[2273]: E1101 01:57:09.207708 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.90.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-0f05b56927&limit=500&resourceVersion=0\": dial tcp 139.178.90.71:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:57:09.208885 kubelet[2273]: W1101 01:57:09.208833 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.90.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.90.71:6443: connect: connection refused Nov 1 
01:57:09.208885 kubelet[2273]: E1101 01:57:09.208869 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.90.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.90.71:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:57:09.212614 kubelet[2273]: I1101 01:57:09.212570 2273 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 01:57:09.212925 kubelet[2273]: I1101 01:57:09.212890 2273 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 01:57:09.213727 kubelet[2273]: W1101 01:57:09.213689 2273 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 01:57:09.218867 kubelet[2273]: I1101 01:57:09.218826 2273 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 01:57:09.218867 kubelet[2273]: I1101 01:57:09.218855 2273 server.go:1287] "Started kubelet" Nov 1 01:57:09.219085 kubelet[2273]: I1101 01:57:09.219025 2273 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:57:09.220000 audit[2273]: AVC avc: denied { mac_admin } for pid=2273 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:09.220512 kubelet[2273]: I1101 01:57:09.220382 2273 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 01:57:09.220512 kubelet[2273]: I1101 01:57:09.220419 2273 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set 
selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 01:57:09.220512 kubelet[2273]: I1101 01:57:09.220483 2273 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:57:09.220656 kubelet[2273]: I1101 01:57:09.220504 2273 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 01:57:09.220826 kubelet[2273]: E1101 01:57:09.220788 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-0f05b56927\" not found" Nov 1 01:57:09.220891 kubelet[2273]: I1101 01:57:09.220853 2273 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 01:57:09.250778 kubelet[2273]: I1101 01:57:09.250757 2273 server.go:479] "Adding debug handlers to kubelet server" Nov 1 01:57:09.250851 kubelet[2273]: I1101 01:57:09.250814 2273 reconciler.go:26] "Reconciler: start to sync state" Nov 1 01:57:09.251079 kubelet[2273]: W1101 01:57:09.251037 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.90.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.71:6443: connect: connection refused Nov 1 01:57:09.251158 kubelet[2273]: I1101 01:57:09.251090 2273 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 01:57:09.251158 kubelet[2273]: E1101 01:57:09.251094 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.90.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.90.71:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:57:09.251158 kubelet[2273]: I1101 01:57:09.250500 2273 ratelimit.go:55] "Setting rate limiting 
for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 01:57:09.251292 kubelet[2273]: E1101 01:57:09.250847 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-0f05b56927?timeout=10s\": dial tcp 139.178.90.71:6443: connect: connection refused" interval="200ms" Nov 1 01:57:09.251396 kubelet[2273]: I1101 01:57:09.251380 2273 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 01:57:09.253036 kubelet[2273]: I1101 01:57:09.253020 2273 factory.go:221] Registration of the containerd container factory successfully Nov 1 01:57:09.253036 kubelet[2273]: I1101 01:57:09.253035 2273 factory.go:221] Registration of the systemd container factory successfully Nov 1 01:57:09.253138 kubelet[2273]: I1101 01:57:09.253122 2273 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:57:09.255501 kubelet[2273]: E1101 01:57:09.255490 2273 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:57:09.257190 kubelet[2273]: E1101 01:57:09.256027 2273 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.90.71:6443/api/v1/namespaces/default/events\": dial tcp 139.178.90.71:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-0f05b56927.1873bf4fec3e86d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-0f05b56927,UID:ci-3510.3.8-n-0f05b56927,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-0f05b56927,},FirstTimestamp:2025-11-01 01:57:09.218838229 +0000 UTC m=+0.341309304,LastTimestamp:2025-11-01 01:57:09.218838229 +0000 UTC m=+0.341309304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-0f05b56927,}" Nov 1 01:57:09.220000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:57:09.309601 kernel: audit: type=1400 audit(1761962229.220:189): avc: denied { mac_admin } for pid=2273 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:09.309633 kernel: audit: type=1401 audit(1761962229.220:189): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:57:09.309647 kernel: audit: type=1300 audit(1761962229.220:189): arch=c000003e syscall=188 success=no exit=-22 a0=c00072d110 a1=c00054d3f8 a2=c00072d0e0 a3=25 items=0 ppid=1 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.220000 audit[2273]: SYSCALL arch=c000003e syscall=188 
success=no exit=-22 a0=c00072d110 a1=c00054d3f8 a2=c00072d0e0 a3=25 items=0 ppid=1 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.321850 kubelet[2273]: E1101 01:57:09.321811 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-0f05b56927\" not found" Nov 1 01:57:09.399592 kubelet[2273]: I1101 01:57:09.399559 2273 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:57:09.399592 kubelet[2273]: I1101 01:57:09.399566 2273 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:57:09.399592 kubelet[2273]: I1101 01:57:09.399578 2273 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:57:09.400396 kernel: audit: type=1327 audit(1761962229.220:189): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:57:09.220000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:57:09.400484 kubelet[2273]: I1101 01:57:09.400476 2273 policy_none.go:49] "None policy: Start" Nov 1 01:57:09.400522 kubelet[2273]: I1101 01:57:09.400486 2273 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 01:57:09.400522 kubelet[2273]: I1101 01:57:09.400495 2273 state_mem.go:35] "Initializing new in-memory state store" Nov 1 01:57:09.422274 kubelet[2273]: E1101 01:57:09.422264 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-0f05b56927\" 
not found" Nov 1 01:57:09.451614 kubelet[2273]: E1101 01:57:09.451602 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-0f05b56927?timeout=10s\": dial tcp 139.178.90.71:6443: connect: connection refused" interval="400ms" Nov 1 01:57:09.220000 audit[2273]: AVC avc: denied { mac_admin } for pid=2273 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:09.523184 kubelet[2273]: E1101 01:57:09.523175 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-0f05b56927\" not found" Nov 1 01:57:09.552099 kernel: audit: type=1400 audit(1761962229.220:190): avc: denied { mac_admin } for pid=2273 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:09.552121 kernel: audit: type=1401 audit(1761962229.220:190): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:57:09.220000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:57:09.220000 audit[2273]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00053f460 a1=c00054d410 a2=c00072d1a0 a3=25 items=0 ppid=1 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.220000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:57:09.251000 audit[2299]: NETFILTER_CFG table=mangle:26 family=2 entries=2 
op=nft_register_chain pid=2299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:09.251000 audit[2299]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdd6220a70 a2=0 a3=7ffdd6220a5c items=0 ppid=2273 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.251000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 01:57:09.251000 audit[2300]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2300 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:09.251000 audit[2300]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe30fd9050 a2=0 a3=7ffe30fd903c items=0 ppid=2273 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.251000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 01:57:09.255000 audit[2302]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2302 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:09.255000 audit[2302]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdf6ae7ea0 a2=0 a3=7ffdf6ae7e8c items=0 ppid=2273 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.255000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 01:57:09.256000 audit[2304]: NETFILTER_CFG 
table=filter:29 family=2 entries=2 op=nft_register_chain pid=2304 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:09.256000 audit[2304]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffddc94c300 a2=0 a3=7ffddc94c2ec items=0 ppid=2273 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.256000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 01:57:09.585000 audit[2307]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2307 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:09.585000 audit[2307]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe4a0bd8e0 a2=0 a3=7ffe4a0bd8cc items=0 ppid=2273 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Nov 1 01:57:09.585662 kubelet[2273]: I1101 01:57:09.585602 2273 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 1 01:57:09.585711 kubelet[2273]: I1101 01:57:09.585702 2273 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 01:57:09.585000 audit[2273]: AVC avc: denied { mac_admin } for pid=2273 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:09.585000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:57:09.585000 audit[2273]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0011756b0 a1=c000fffea8 a2=c001175680 a3=25 items=0 ppid=1 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.585000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:57:09.585889 kubelet[2273]: I1101 01:57:09.585738 2273 server.go:94] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 01:57:09.585889 kubelet[2273]: I1101 01:57:09.585818 2273 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:57:09.585889 kubelet[2273]: I1101 01:57:09.585827 2273 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:57:09.585962 kubelet[2273]: I1101 01:57:09.585945 2273 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:57:09.585000 audit[2308]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:09.585000 audit[2308]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd28fff380 a2=0 a3=7ffd28fff36c items=0 ppid=2273 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 01:57:09.585000 audit[2309]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:09.585000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe89bb1860 a2=0 a3=7ffe89bb184c items=0 ppid=2273 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 
01:57:09.586319 kubelet[2273]: I1101 01:57:09.586216 2273 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 01:57:09.586319 kubelet[2273]: I1101 01:57:09.586230 2273 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 01:57:09.586319 kubelet[2273]: E1101 01:57:09.586230 2273 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 01:57:09.586319 kubelet[2273]: I1101 01:57:09.586246 2273 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 01:57:09.586319 kubelet[2273]: I1101 01:57:09.586252 2273 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 01:57:09.586319 kubelet[2273]: E1101 01:57:09.586258 2273 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-0f05b56927\" not found" Nov 1 01:57:09.586319 kubelet[2273]: E1101 01:57:09.586282 2273 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Nov 1 01:57:09.586000 audit[2311]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:09.586000 audit[2311]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff9dd7d200 a2=0 a3=7fff9dd7d1ec items=0 ppid=2273 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.586000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 01:57:09.586000 audit[2312]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2312 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:09.586000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd43ec91c0 a2=0 a3=7ffd43ec91ac items=0 ppid=2273 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.586000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 01:57:09.586000 audit[2313]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=2313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:09.586000 audit[2313]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffd5b5f5120 a2=0 a3=7ffd5b5f510c items=0 ppid=2273 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.586000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 01:57:09.586000 audit[2314]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:09.586000 audit[2314]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1d246340 a2=0 a3=7ffc1d24632c items=0 ppid=2273 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.586000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 01:57:09.587592 kubelet[2273]: W1101 01:57:09.587570 2273 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.90.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.71:6443: connect: connection refused Nov 1 01:57:09.587618 kubelet[2273]: E1101 01:57:09.587600 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.90.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.90.71:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:57:09.587000 audit[2315]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:09.587000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdc9c5ac70 a2=0 a3=7ffdc9c5ac5c items=0 ppid=2273 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:09.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 01:57:09.688429 kubelet[2273]: I1101 01:57:09.688309 2273 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.689651 kubelet[2273]: E1101 01:57:09.689609 2273 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.90.71:6443/api/v1/nodes\": dial tcp 139.178.90.71:6443: connect: connection refused" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.695156 kubelet[2273]: E1101 01:57:09.695125 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-0f05b56927\" not found" node="ci-3510.3.8-n-0f05b56927" Nov 1 
01:57:09.698299 kubelet[2273]: E1101 01:57:09.698233 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-0f05b56927\" not found" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.700513 kubelet[2273]: E1101 01:57:09.700439 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-0f05b56927\" not found" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.753712 kubelet[2273]: I1101 01:57:09.753579 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebb7321aec8a1b17d8f8880fe6c3eda7-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-0f05b56927\" (UID: \"ebb7321aec8a1b17d8f8880fe6c3eda7\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.753712 kubelet[2273]: I1101 01:57:09.753684 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/238da1f025a0b84d0594f5f82a54746c-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-0f05b56927\" (UID: \"238da1f025a0b84d0594f5f82a54746c\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.754113 kubelet[2273]: I1101 01:57:09.753768 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/238da1f025a0b84d0594f5f82a54746c-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-0f05b56927\" (UID: \"238da1f025a0b84d0594f5f82a54746c\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.754113 kubelet[2273]: I1101 01:57:09.753824 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/238da1f025a0b84d0594f5f82a54746c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-0f05b56927\" (UID: \"238da1f025a0b84d0594f5f82a54746c\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.754113 kubelet[2273]: I1101 01:57:09.753875 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebb7321aec8a1b17d8f8880fe6c3eda7-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-0f05b56927\" (UID: \"ebb7321aec8a1b17d8f8880fe6c3eda7\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.754113 kubelet[2273]: I1101 01:57:09.753928 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebb7321aec8a1b17d8f8880fe6c3eda7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-0f05b56927\" (UID: \"ebb7321aec8a1b17d8f8880fe6c3eda7\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.754113 kubelet[2273]: I1101 01:57:09.753976 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/238da1f025a0b84d0594f5f82a54746c-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-0f05b56927\" (UID: \"238da1f025a0b84d0594f5f82a54746c\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.754636 kubelet[2273]: I1101 01:57:09.754024 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/238da1f025a0b84d0594f5f82a54746c-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-0f05b56927\" (UID: \"238da1f025a0b84d0594f5f82a54746c\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.754636 
kubelet[2273]: I1101 01:57:09.754084 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abb9412702bb423f5b14a8df7430a6a3-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-0f05b56927\" (UID: \"abb9412702bb423f5b14a8df7430a6a3\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.853056 kubelet[2273]: E1101 01:57:09.852926 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-0f05b56927?timeout=10s\": dial tcp 139.178.90.71:6443: connect: connection refused" interval="800ms" Nov 1 01:57:09.894439 kubelet[2273]: I1101 01:57:09.894366 2273 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.895228 kubelet[2273]: E1101 01:57:09.895123 2273 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.90.71:6443/api/v1/nodes\": dial tcp 139.178.90.71:6443: connect: connection refused" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:09.997745 env[1679]: time="2025-11-01T01:57:09.997599163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-0f05b56927,Uid:ebb7321aec8a1b17d8f8880fe6c3eda7,Namespace:kube-system,Attempt:0,}" Nov 1 01:57:10.000546 env[1679]: time="2025-11-01T01:57:10.000436664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-0f05b56927,Uid:238da1f025a0b84d0594f5f82a54746c,Namespace:kube-system,Attempt:0,}" Nov 1 01:57:10.002390 env[1679]: time="2025-11-01T01:57:10.002289122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-0f05b56927,Uid:abb9412702bb423f5b14a8df7430a6a3,Namespace:kube-system,Attempt:0,}" Nov 1 01:57:10.052559 kubelet[2273]: W1101 01:57:10.052413 2273 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.90.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.90.71:6443: connect: connection refused Nov 1 01:57:10.052559 kubelet[2273]: E1101 01:57:10.052555 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.90.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.90.71:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:57:10.143571 kubelet[2273]: E1101 01:57:10.143276 2273 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.90.71:6443/api/v1/namespaces/default/events\": dial tcp 139.178.90.71:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-0f05b56927.1873bf4fec3e86d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-0f05b56927,UID:ci-3510.3.8-n-0f05b56927,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-0f05b56927,},FirstTimestamp:2025-11-01 01:57:09.218838229 +0000 UTC m=+0.341309304,LastTimestamp:2025-11-01 01:57:09.218838229 +0000 UTC m=+0.341309304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-0f05b56927,}" Nov 1 01:57:10.178273 kubelet[2273]: W1101 01:57:10.178145 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.90.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.71:6443: connect: connection refused Nov 1 01:57:10.178551 kubelet[2273]: E1101 01:57:10.178302 2273 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.90.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.90.71:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:57:10.299712 kubelet[2273]: I1101 01:57:10.299452 2273 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:10.300264 kubelet[2273]: E1101 01:57:10.300170 2273 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.90.71:6443/api/v1/nodes\": dial tcp 139.178.90.71:6443: connect: connection refused" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:10.537941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1715699552.mount: Deactivated successfully. Nov 1 01:57:10.538826 env[1679]: time="2025-11-01T01:57:10.538805999Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:10.539998 env[1679]: time="2025-11-01T01:57:10.539966848Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:10.540630 env[1679]: time="2025-11-01T01:57:10.540618513Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:10.541048 env[1679]: time="2025-11-01T01:57:10.541036420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:10.541443 env[1679]: time="2025-11-01T01:57:10.541430993Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:10.542518 env[1679]: time="2025-11-01T01:57:10.542506750Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:10.542888 env[1679]: time="2025-11-01T01:57:10.542877584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:10.544493 env[1679]: time="2025-11-01T01:57:10.544450363Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:10.545623 env[1679]: time="2025-11-01T01:57:10.545608007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:10.546345 env[1679]: time="2025-11-01T01:57:10.546330929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:10.546700 env[1679]: time="2025-11-01T01:57:10.546690542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:10.547053 env[1679]: time="2025-11-01T01:57:10.547043347Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:10.551514 env[1679]: time="2025-11-01T01:57:10.551442962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:10.551514 env[1679]: time="2025-11-01T01:57:10.551466448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:10.551514 env[1679]: time="2025-11-01T01:57:10.551473690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:10.551649 env[1679]: time="2025-11-01T01:57:10.551542719Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94011f041f608763fe6bbf2f1ecdcf27ec6f77b4d3fa8d046f229e21e3c43d27 pid=2327 runtime=io.containerd.runc.v2 Nov 1 01:57:10.552014 env[1679]: time="2025-11-01T01:57:10.551977984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:10.552014 env[1679]: time="2025-11-01T01:57:10.552004835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:10.552090 env[1679]: time="2025-11-01T01:57:10.552017402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:10.552148 env[1679]: time="2025-11-01T01:57:10.552114560Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c0bf7d9660d8d38aaec77863534d7b3705c8e7247291e823ce414c42efc763e pid=2335 runtime=io.containerd.runc.v2 Nov 1 01:57:10.553660 env[1679]: time="2025-11-01T01:57:10.553616341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:10.553660 env[1679]: time="2025-11-01T01:57:10.553645473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:10.553660 env[1679]: time="2025-11-01T01:57:10.553655829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:10.553803 env[1679]: time="2025-11-01T01:57:10.553744481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c98a611fd431a94ead0e2939ed461f7890c11e1cc20d9362e5524e609cf2985f pid=2357 runtime=io.containerd.runc.v2 Nov 1 01:57:10.581098 env[1679]: time="2025-11-01T01:57:10.581062121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-0f05b56927,Uid:ebb7321aec8a1b17d8f8880fe6c3eda7,Namespace:kube-system,Attempt:0,} returns sandbox id \"94011f041f608763fe6bbf2f1ecdcf27ec6f77b4d3fa8d046f229e21e3c43d27\"" Nov 1 01:57:10.581203 env[1679]: time="2025-11-01T01:57:10.581087478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-0f05b56927,Uid:238da1f025a0b84d0594f5f82a54746c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c0bf7d9660d8d38aaec77863534d7b3705c8e7247291e823ce414c42efc763e\"" Nov 1 01:57:10.582491 env[1679]: time="2025-11-01T01:57:10.582473938Z" level=info 
msg="CreateContainer within sandbox \"94011f041f608763fe6bbf2f1ecdcf27ec6f77b4d3fa8d046f229e21e3c43d27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 01:57:10.582534 env[1679]: time="2025-11-01T01:57:10.582473067Z" level=info msg="CreateContainer within sandbox \"2c0bf7d9660d8d38aaec77863534d7b3705c8e7247291e823ce414c42efc763e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 01:57:10.582970 env[1679]: time="2025-11-01T01:57:10.582955500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-0f05b56927,Uid:abb9412702bb423f5b14a8df7430a6a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c98a611fd431a94ead0e2939ed461f7890c11e1cc20d9362e5524e609cf2985f\"" Nov 1 01:57:10.584313 env[1679]: time="2025-11-01T01:57:10.584300049Z" level=info msg="CreateContainer within sandbox \"c98a611fd431a94ead0e2939ed461f7890c11e1cc20d9362e5524e609cf2985f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 01:57:10.588394 env[1679]: time="2025-11-01T01:57:10.588377300Z" level=info msg="CreateContainer within sandbox \"94011f041f608763fe6bbf2f1ecdcf27ec6f77b4d3fa8d046f229e21e3c43d27\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"44b49a8ced21d3daeee540722fa109f30efdad363b9c02c4989ef3e05cf53f5c\"" Nov 1 01:57:10.588575 env[1679]: time="2025-11-01T01:57:10.588562655Z" level=info msg="StartContainer for \"44b49a8ced21d3daeee540722fa109f30efdad363b9c02c4989ef3e05cf53f5c\"" Nov 1 01:57:10.589298 env[1679]: time="2025-11-01T01:57:10.589284848Z" level=info msg="CreateContainer within sandbox \"2c0bf7d9660d8d38aaec77863534d7b3705c8e7247291e823ce414c42efc763e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"27ea94f0ba986bbfe9b4ad25f19f48d0711954f3a1ef6c7e4c341dcb33584960\"" Nov 1 01:57:10.589425 env[1679]: time="2025-11-01T01:57:10.589413481Z" level=info msg="StartContainer for 
\"27ea94f0ba986bbfe9b4ad25f19f48d0711954f3a1ef6c7e4c341dcb33584960\"" Nov 1 01:57:10.590481 env[1679]: time="2025-11-01T01:57:10.590466377Z" level=info msg="CreateContainer within sandbox \"c98a611fd431a94ead0e2939ed461f7890c11e1cc20d9362e5524e609cf2985f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"379f61b4ecf42e4771d2d0760c8a3f71eab3393027f486850c03eef9561506e4\"" Nov 1 01:57:10.590616 env[1679]: time="2025-11-01T01:57:10.590604795Z" level=info msg="StartContainer for \"379f61b4ecf42e4771d2d0760c8a3f71eab3393027f486850c03eef9561506e4\"" Nov 1 01:57:10.622155 env[1679]: time="2025-11-01T01:57:10.622125182Z" level=info msg="StartContainer for \"379f61b4ecf42e4771d2d0760c8a3f71eab3393027f486850c03eef9561506e4\" returns successfully" Nov 1 01:57:10.622284 env[1679]: time="2025-11-01T01:57:10.622271808Z" level=info msg="StartContainer for \"27ea94f0ba986bbfe9b4ad25f19f48d0711954f3a1ef6c7e4c341dcb33584960\" returns successfully" Nov 1 01:57:10.622332 env[1679]: time="2025-11-01T01:57:10.622274356Z" level=info msg="StartContainer for \"44b49a8ced21d3daeee540722fa109f30efdad363b9c02c4989ef3e05cf53f5c\" returns successfully" Nov 1 01:57:11.102450 kubelet[2273]: I1101 01:57:11.102404 2273 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:11.125026 kubelet[2273]: E1101 01:57:11.124998 2273 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-0f05b56927\" not found" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:11.202553 kubelet[2273]: I1101 01:57:11.202508 2273 apiserver.go:52] "Watching apiserver" Nov 1 01:57:11.227654 kubelet[2273]: I1101 01:57:11.227639 2273 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:11.251244 kubelet[2273]: I1101 01:57:11.251231 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 
01:57:11.251321 kubelet[2273]: I1101 01:57:11.251309 2273 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 01:57:11.304157 kubelet[2273]: E1101 01:57:11.304120 2273 kubelet.go:3196] "Failed creating a mirror pod" err="namespaces \"kube-system\" not found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:11.304157 kubelet[2273]: I1101 01:57:11.304133 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:11.357145 kubelet[2273]: E1101 01:57:11.357070 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-0f05b56927\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:11.357145 kubelet[2273]: I1101 01:57:11.357092 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:11.358335 kubelet[2273]: E1101 01:57:11.358286 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-0f05b56927\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:11.594879 kubelet[2273]: I1101 01:57:11.594776 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:11.597629 kubelet[2273]: I1101 01:57:11.597588 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:11.599369 kubelet[2273]: E1101 01:57:11.599295 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-0f05b56927\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:11.600693 kubelet[2273]: I1101 01:57:11.600627 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:11.602031 kubelet[2273]: E1101 01:57:11.601959 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-0f05b56927\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:11.604613 kubelet[2273]: E1101 01:57:11.604524 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-0f05b56927\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:12.602830 kubelet[2273]: I1101 01:57:12.602771 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:12.603712 kubelet[2273]: I1101 01:57:12.603008 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:12.603712 kubelet[2273]: I1101 01:57:12.603555 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:12.610262 kubelet[2273]: W1101 01:57:12.610196 2273 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:57:12.611246 kubelet[2273]: W1101 01:57:12.611184 2273 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:57:12.611940 kubelet[2273]: W1101 01:57:12.611899 2273 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:57:13.604609 kubelet[2273]: I1101 01:57:13.604539 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:13.605718 kubelet[2273]: I1101 01:57:13.604941 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:13.613702 kubelet[2273]: W1101 01:57:13.613625 2273 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:57:13.614042 kubelet[2273]: E1101 01:57:13.613788 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-0f05b56927\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:13.615235 kubelet[2273]: W1101 01:57:13.615107 2273 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:57:13.615605 kubelet[2273]: E1101 01:57:13.615267 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-0f05b56927\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:13.896986 systemd[1]: Reloading. 
Nov 1 01:57:13.955206 /usr/lib/systemd/system-generators/torcx-generator[2603]: time="2025-11-01T01:57:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:57:13.955230 /usr/lib/systemd/system-generators/torcx-generator[2603]: time="2025-11-01T01:57:13Z" level=info msg="torcx already run" Nov 1 01:57:14.051175 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:57:14.051188 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:57:14.066446 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:57:14.130187 systemd[1]: Stopping kubelet.service... Nov 1 01:57:14.154597 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 01:57:14.154767 systemd[1]: Stopped kubelet.service. Nov 1 01:57:14.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:57:14.155743 systemd[1]: Starting kubelet.service... Nov 1 01:57:14.181002 kernel: kauditd_printk_skb: 42 callbacks suppressed Nov 1 01:57:14.181060 kernel: audit: type=1131 audit(1761962234.153:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:57:14.369428 systemd[1]: Started kubelet.service. 
Nov 1 01:57:14.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:57:14.390694 kubelet[2679]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:57:14.390694 kubelet[2679]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 01:57:14.390694 kubelet[2679]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:57:14.391023 kubelet[2679]: I1101 01:57:14.390740 2679 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 01:57:14.396139 kubelet[2679]: I1101 01:57:14.396098 2679 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 01:57:14.396139 kubelet[2679]: I1101 01:57:14.396110 2679 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 01:57:14.396249 kubelet[2679]: I1101 01:57:14.396244 2679 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 01:57:14.396971 kubelet[2679]: I1101 01:57:14.396932 2679 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 1 01:57:14.398100 kubelet[2679]: I1101 01:57:14.398065 2679 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:57:14.399904 kubelet[2679]: E1101 01:57:14.399886 2679 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:57:14.399904 kubelet[2679]: I1101 01:57:14.399904 2679 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 01:57:14.435335 kernel: audit: type=1130 audit(1761962234.369:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:57:14.440067 kubelet[2679]: I1101 01:57:14.440055 2679 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 01:57:14.440399 kubelet[2679]: I1101 01:57:14.440356 2679 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:57:14.440499 kubelet[2679]: I1101 01:57:14.440373 2679 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-0f05b56927","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 01:57:14.440499 kubelet[2679]: I1101 01:57:14.440481 2679 topology_manager.go:138] "Creating topology manager with 
none policy" Nov 1 01:57:14.440499 kubelet[2679]: I1101 01:57:14.440487 2679 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 01:57:14.440602 kubelet[2679]: I1101 01:57:14.440516 2679 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:57:14.440621 kubelet[2679]: I1101 01:57:14.440603 2679 kubelet.go:446] "Attempting to sync node with API server" Nov 1 01:57:14.440621 kubelet[2679]: I1101 01:57:14.440613 2679 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:57:14.440654 kubelet[2679]: I1101 01:57:14.440623 2679 kubelet.go:352] "Adding apiserver pod source" Nov 1 01:57:14.440654 kubelet[2679]: I1101 01:57:14.440631 2679 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:57:14.441068 kubelet[2679]: I1101 01:57:14.441034 2679 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 01:57:14.441275 kubelet[2679]: I1101 01:57:14.441267 2679 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 01:57:14.441496 kubelet[2679]: I1101 01:57:14.441490 2679 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 01:57:14.441521 kubelet[2679]: I1101 01:57:14.441505 2679 server.go:1287] "Started kubelet" Nov 1 01:57:14.441643 kubelet[2679]: I1101 01:57:14.441622 2679 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:57:14.441790 kubelet[2679]: I1101 01:57:14.441681 2679 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 01:57:14.441890 kubelet[2679]: I1101 01:57:14.441881 2679 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 01:57:14.441000 audit[2679]: AVC avc: denied { mac_admin } for pid=2679 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:57:14.442505 kubelet[2679]: I1101 01:57:14.442414 2679 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 01:57:14.442545 kubelet[2679]: I1101 01:57:14.442534 2679 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 01:57:14.442575 kubelet[2679]: I1101 01:57:14.442565 2679 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:57:14.442899 kubelet[2679]: I1101 01:57:14.442874 2679 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 01:57:14.442963 kubelet[2679]: I1101 01:57:14.442952 2679 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 01:57:14.443039 kubelet[2679]: I1101 01:57:14.442943 2679 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 01:57:14.443171 kubelet[2679]: E1101 01:57:14.442881 2679 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-0f05b56927\" not found" Nov 1 01:57:14.443313 kubelet[2679]: I1101 01:57:14.443298 2679 reconciler.go:26] "Reconciler: start to sync state" Nov 1 01:57:14.441000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:57:14.506080 kubelet[2679]: E1101 01:57:14.506065 2679 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:57:14.506283 kubelet[2679]: I1101 01:57:14.506271 2679 factory.go:221] Registration of the systemd container factory successfully Nov 1 01:57:14.506320 kubelet[2679]: I1101 01:57:14.506303 2679 server.go:479] "Adding debug handlers to kubelet server" Nov 1 01:57:14.506365 kubelet[2679]: I1101 01:57:14.506336 2679 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:57:14.506923 kubelet[2679]: I1101 01:57:14.506914 2679 factory.go:221] Registration of the containerd container factory successfully Nov 1 01:57:14.510179 kubelet[2679]: I1101 01:57:14.510154 2679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 01:57:14.510685 kubelet[2679]: I1101 01:57:14.510676 2679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 01:57:14.510729 kubelet[2679]: I1101 01:57:14.510694 2679 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 01:57:14.510729 kubelet[2679]: I1101 01:57:14.510716 2679 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 01:57:14.510729 kubelet[2679]: I1101 01:57:14.510730 2679 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 01:57:14.511080 kubelet[2679]: E1101 01:57:14.510827 2679 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:57:14.525834 kubelet[2679]: I1101 01:57:14.525817 2679 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:57:14.525834 kubelet[2679]: I1101 01:57:14.525827 2679 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:57:14.525834 kubelet[2679]: I1101 01:57:14.525836 2679 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:57:14.525960 kubelet[2679]: I1101 01:57:14.525925 2679 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 01:57:14.525960 kubelet[2679]: I1101 01:57:14.525931 2679 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 01:57:14.525960 kubelet[2679]: I1101 01:57:14.525942 2679 policy_none.go:49] "None policy: Start" Nov 1 01:57:14.525960 kubelet[2679]: I1101 01:57:14.525948 2679 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 01:57:14.525960 kubelet[2679]: I1101 01:57:14.525953 2679 state_mem.go:35] "Initializing new in-memory state store" Nov 1 01:57:14.526084 kubelet[2679]: I1101 01:57:14.526076 2679 state_mem.go:75] "Updated machine memory state" Nov 1 01:57:14.527118 kubelet[2679]: I1101 01:57:14.527107 2679 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 01:57:14.527162 kubelet[2679]: I1101 01:57:14.527142 2679 server.go:94] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 01:57:14.527249 kubelet[2679]: I1101 01:57:14.527242 2679 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:57:14.527292 kubelet[2679]: I1101 01:57:14.527250 2679 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:57:14.527367 kubelet[2679]: I1101 01:57:14.527356 2679 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:57:14.527637 kubelet[2679]: E1101 01:57:14.527628 2679 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 01:57:14.538110 kernel: audit: type=1400 audit(1761962234.441:206): avc: denied { mac_admin } for pid=2679 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:14.538145 kernel: audit: type=1401 audit(1761962234.441:206): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:57:14.538157 kernel: audit: type=1300 audit(1761962234.441:206): arch=c000003e syscall=188 success=no exit=-22 a0=c000fa4360 a1=c000f982d0 a2=c000fa4330 a3=25 items=0 ppid=1 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:14.441000 audit[2679]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000fa4360 a1=c000f982d0 a2=c000fa4330 a3=25 items=0 ppid=1 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:14.611370 kubelet[2679]: I1101 
01:57:14.611331 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.611424 kubelet[2679]: I1101 01:57:14.611402 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.611503 kubelet[2679]: I1101 01:57:14.611458 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.615927 kubelet[2679]: W1101 01:57:14.615888 2679 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:57:14.615927 kubelet[2679]: E1101 01:57:14.615913 2679 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-0f05b56927\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.616655 kubelet[2679]: W1101 01:57:14.616647 2679 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:57:14.616692 kubelet[2679]: W1101 01:57:14.616661 2679 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:57:14.616692 kubelet[2679]: E1101 01:57:14.616676 2679 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-0f05b56927\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.616692 kubelet[2679]: E1101 01:57:14.616687 2679 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-0f05b56927\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.629293 kubelet[2679]: I1101 01:57:14.629257 2679 
kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.441000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:57:14.634202 kubelet[2679]: I1101 01:57:14.634193 2679 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.634243 kubelet[2679]: I1101 01:57:14.634222 2679 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.644361 kubelet[2679]: I1101 01:57:14.644347 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/238da1f025a0b84d0594f5f82a54746c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-0f05b56927\" (UID: \"238da1f025a0b84d0594f5f82a54746c\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.644413 kubelet[2679]: I1101 01:57:14.644365 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/238da1f025a0b84d0594f5f82a54746c-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-0f05b56927\" (UID: \"238da1f025a0b84d0594f5f82a54746c\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.644413 kubelet[2679]: I1101 01:57:14.644376 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/238da1f025a0b84d0594f5f82a54746c-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-0f05b56927\" (UID: \"238da1f025a0b84d0594f5f82a54746c\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.644413 kubelet[2679]: I1101 01:57:14.644386 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebb7321aec8a1b17d8f8880fe6c3eda7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-0f05b56927\" (UID: \"ebb7321aec8a1b17d8f8880fe6c3eda7\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.644413 kubelet[2679]: I1101 01:57:14.644397 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/238da1f025a0b84d0594f5f82a54746c-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-0f05b56927\" (UID: \"238da1f025a0b84d0594f5f82a54746c\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.644413 kubelet[2679]: I1101 01:57:14.644406 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/238da1f025a0b84d0594f5f82a54746c-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-0f05b56927\" (UID: \"238da1f025a0b84d0594f5f82a54746c\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.644510 kubelet[2679]: I1101 01:57:14.644415 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abb9412702bb423f5b14a8df7430a6a3-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-0f05b56927\" (UID: \"abb9412702bb423f5b14a8df7430a6a3\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.644510 kubelet[2679]: I1101 01:57:14.644428 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ebb7321aec8a1b17d8f8880fe6c3eda7-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-0f05b56927\" (UID: \"ebb7321aec8a1b17d8f8880fe6c3eda7\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.644510 kubelet[2679]: I1101 01:57:14.644437 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebb7321aec8a1b17d8f8880fe6c3eda7-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-0f05b56927\" (UID: \"ebb7321aec8a1b17d8f8880fe6c3eda7\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:14.723488 kernel: audit: type=1327 audit(1761962234.441:206): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:57:14.723521 kernel: audit: type=1400 audit(1761962234.441:207): avc: denied { mac_admin } for pid=2679 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:14.441000 audit[2679]: AVC avc: denied { mac_admin } for pid=2679 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:14.786309 kernel: audit: type=1401 audit(1761962234.441:207): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:57:14.441000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:57:14.818361 kernel: audit: type=1300 audit(1761962234.441:207): arch=c000003e syscall=188 success=no exit=-22 a0=c000f003c0 a1=c000f0c120 a2=c000f02420 a3=25 items=0 ppid=1 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" 
exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:14.441000 audit[2679]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f003c0 a1=c000f0c120 a2=c000f02420 a3=25 items=0 ppid=1 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:14.911770 kernel: audit: type=1327 audit(1761962234.441:207): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:57:14.441000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:57:14.525000 audit[2679]: AVC avc: denied { mac_admin } for pid=2679 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:14.525000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:57:14.525000 audit[2679]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a1d4d0 a1=c0011f4fa8 a2=c000a1d4a0 a3=25 items=0 ppid=1 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:14.525000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:57:15.441847 kubelet[2679]: I1101 01:57:15.441768 2679 apiserver.go:52] "Watching apiserver" Nov 1 01:57:15.443983 kubelet[2679]: I1101 01:57:15.443945 2679 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 01:57:15.515442 kubelet[2679]: I1101 01:57:15.515422 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:15.515567 kubelet[2679]: I1101 01:57:15.515456 2679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:15.519424 kubelet[2679]: W1101 01:57:15.519403 2679 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:57:15.519504 kubelet[2679]: E1101 01:57:15.519446 2679 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-0f05b56927\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:15.519504 kubelet[2679]: W1101 01:57:15.519478 2679 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 01:57:15.519552 kubelet[2679]: E1101 01:57:15.519503 2679 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-0f05b56927\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" Nov 1 01:57:15.527959 kubelet[2679]: I1101 01:57:15.527888 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-3510.3.8-n-0f05b56927" podStartSLOduration=3.527865858 podStartE2EDuration="3.527865858s" podCreationTimestamp="2025-11-01 01:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:57:15.527853448 +0000 UTC m=+1.155489761" watchObservedRunningTime="2025-11-01 01:57:15.527865858 +0000 UTC m=+1.155502170" Nov 1 01:57:15.533175 kubelet[2679]: I1101 01:57:15.533144 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0f05b56927" podStartSLOduration=3.533133093 podStartE2EDuration="3.533133093s" podCreationTimestamp="2025-11-01 01:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:57:15.532885818 +0000 UTC m=+1.160522129" watchObservedRunningTime="2025-11-01 01:57:15.533133093 +0000 UTC m=+1.160769400" Nov 1 01:57:15.537774 kubelet[2679]: I1101 01:57:15.537699 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0f05b56927" podStartSLOduration=3.537688069 podStartE2EDuration="3.537688069s" podCreationTimestamp="2025-11-01 01:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:57:15.537671358 +0000 UTC m=+1.165307669" watchObservedRunningTime="2025-11-01 01:57:15.537688069 +0000 UTC m=+1.165324377" Nov 1 01:57:18.186392 kubelet[2679]: I1101 01:57:18.186299 2679 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 01:57:18.187488 env[1679]: time="2025-11-01T01:57:18.187033554Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 1 01:57:18.188426 kubelet[2679]: I1101 01:57:18.187501 2679 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 01:57:19.272694 kubelet[2679]: I1101 01:57:19.272567 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/13332041-6d79-4c00-a9a0-70f8d94c0bae-kube-proxy\") pod \"kube-proxy-r6brs\" (UID: \"13332041-6d79-4c00-a9a0-70f8d94c0bae\") " pod="kube-system/kube-proxy-r6brs" Nov 1 01:57:19.273671 kubelet[2679]: I1101 01:57:19.272715 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13332041-6d79-4c00-a9a0-70f8d94c0bae-xtables-lock\") pod \"kube-proxy-r6brs\" (UID: \"13332041-6d79-4c00-a9a0-70f8d94c0bae\") " pod="kube-system/kube-proxy-r6brs" Nov 1 01:57:19.273671 kubelet[2679]: I1101 01:57:19.272818 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-748d9\" (UniqueName: \"kubernetes.io/projected/13332041-6d79-4c00-a9a0-70f8d94c0bae-kube-api-access-748d9\") pod \"kube-proxy-r6brs\" (UID: \"13332041-6d79-4c00-a9a0-70f8d94c0bae\") " pod="kube-system/kube-proxy-r6brs" Nov 1 01:57:19.273671 kubelet[2679]: I1101 01:57:19.272889 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13332041-6d79-4c00-a9a0-70f8d94c0bae-lib-modules\") pod \"kube-proxy-r6brs\" (UID: \"13332041-6d79-4c00-a9a0-70f8d94c0bae\") " pod="kube-system/kube-proxy-r6brs" Nov 1 01:57:19.373720 kubelet[2679]: I1101 01:57:19.373628 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4c39299-71ad-4ad8-a631-7edc8309f43c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-mqst4\" (UID: 
\"c4c39299-71ad-4ad8-a631-7edc8309f43c\") " pod="tigera-operator/tigera-operator-7dcd859c48-mqst4" Nov 1 01:57:19.373720 kubelet[2679]: I1101 01:57:19.373697 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lcdv\" (UniqueName: \"kubernetes.io/projected/c4c39299-71ad-4ad8-a631-7edc8309f43c-kube-api-access-2lcdv\") pod \"tigera-operator-7dcd859c48-mqst4\" (UID: \"c4c39299-71ad-4ad8-a631-7edc8309f43c\") " pod="tigera-operator/tigera-operator-7dcd859c48-mqst4" Nov 1 01:57:19.384234 kubelet[2679]: I1101 01:57:19.384185 2679 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 01:57:19.528851 env[1679]: time="2025-11-01T01:57:19.528629179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r6brs,Uid:13332041-6d79-4c00-a9a0-70f8d94c0bae,Namespace:kube-system,Attempt:0,}" Nov 1 01:57:19.550976 env[1679]: time="2025-11-01T01:57:19.550806843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:19.550976 env[1679]: time="2025-11-01T01:57:19.550901837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:19.550976 env[1679]: time="2025-11-01T01:57:19.550940640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:19.551452 env[1679]: time="2025-11-01T01:57:19.551344806Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f176b30788ab158684fcfd1e1d09382662f1a63f00d0c1a7c98e01c2000232e pid=2769 runtime=io.containerd.runc.v2 Nov 1 01:57:19.586968 env[1679]: time="2025-11-01T01:57:19.586899120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r6brs,Uid:13332041-6d79-4c00-a9a0-70f8d94c0bae,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f176b30788ab158684fcfd1e1d09382662f1a63f00d0c1a7c98e01c2000232e\"" Nov 1 01:57:19.588993 env[1679]: time="2025-11-01T01:57:19.588943567Z" level=info msg="CreateContainer within sandbox \"4f176b30788ab158684fcfd1e1d09382662f1a63f00d0c1a7c98e01c2000232e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 01:57:19.596455 env[1679]: time="2025-11-01T01:57:19.596392904Z" level=info msg="CreateContainer within sandbox \"4f176b30788ab158684fcfd1e1d09382662f1a63f00d0c1a7c98e01c2000232e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"51aecae02b3e602c58d660e595b5017549e42e6ac1145a94198bec92c12d291c\"" Nov 1 01:57:19.596829 env[1679]: time="2025-11-01T01:57:19.596770149Z" level=info msg="StartContainer for \"51aecae02b3e602c58d660e595b5017549e42e6ac1145a94198bec92c12d291c\"" Nov 1 01:57:19.636662 env[1679]: time="2025-11-01T01:57:19.636598357Z" level=info msg="StartContainer for \"51aecae02b3e602c58d660e595b5017549e42e6ac1145a94198bec92c12d291c\" returns successfully" Nov 1 01:57:19.670333 env[1679]: time="2025-11-01T01:57:19.670298382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mqst4,Uid:c4c39299-71ad-4ad8-a631-7edc8309f43c,Namespace:tigera-operator,Attempt:0,}" Nov 1 01:57:19.677314 env[1679]: time="2025-11-01T01:57:19.677277886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:19.677314 env[1679]: time="2025-11-01T01:57:19.677303143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:19.677314 env[1679]: time="2025-11-01T01:57:19.677312480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:19.677436 env[1679]: time="2025-11-01T01:57:19.677415864Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39df6050c257c7ec30ceead9e16bf0651b29a639b5cefedb27656b6e26f0b4e6 pid=2856 runtime=io.containerd.runc.v2 Nov 1 01:57:19.711080 env[1679]: time="2025-11-01T01:57:19.711053974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mqst4,Uid:c4c39299-71ad-4ad8-a631-7edc8309f43c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"39df6050c257c7ec30ceead9e16bf0651b29a639b5cefedb27656b6e26f0b4e6\"" Nov 1 01:57:19.712525 env[1679]: time="2025-11-01T01:57:19.712505800Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 01:57:19.777000 audit[2922]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.804599 kernel: kauditd_printk_skb: 4 callbacks suppressed Nov 1 01:57:19.804666 kernel: audit: type=1325 audit(1761962239.777:209): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.777000 audit[2922]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff97dd64e0 a2=0 a3=7fff97dd64cc items=0 ppid=2820 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 
1 01:57:19.955937 kernel: audit: type=1300 audit(1761962239.777:209): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff97dd64e0 a2=0 a3=7fff97dd64cc items=0 ppid=2820 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.955975 kernel: audit: type=1327 audit(1761962239.777:209): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 01:57:19.777000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 01:57:20.013276 kernel: audit: type=1325 audit(1761962239.778:210): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:19.778000 audit[2923]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:19.778000 audit[2923]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3d94ab10 a2=0 a3=7fff3d94aafc items=0 ppid=2820 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.165599 kernel: audit: type=1300 audit(1761962239.778:210): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3d94ab10 a2=0 a3=7fff3d94aafc items=0 ppid=2820 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.165630 kernel: audit: type=1327 audit(1761962239.778:210): 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 01:57:19.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 01:57:20.223003 kernel: audit: type=1325 audit(1761962239.778:211): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.778000 audit[2924]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:20.280141 kernel: audit: type=1300 audit(1761962239.778:211): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe40fc7df0 a2=0 a3=7ffe40fc7ddc items=0 ppid=2820 pid=2924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.778000 audit[2924]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe40fc7df0 a2=0 a3=7ffe40fc7ddc items=0 ppid=2820 pid=2924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.375557 kernel: audit: type=1327 audit(1761962239.778:211): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 01:57:19.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 01:57:20.432770 kernel: audit: type=1325 audit(1761962239.779:212): table=nat:41 family=10 entries=1 op=nft_register_chain pid=2925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:19.779000 audit[2925]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain 
pid=2925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:19.779000 audit[2925]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe53ad4950 a2=0 a3=7ffe53ad493c items=0 ppid=2820 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.779000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 01:57:19.779000 audit[2927]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.779000 audit[2927]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc4f32e10 a2=0 a3=7fffc4f32dfc items=0 ppid=2820 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.779000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 01:57:19.780000 audit[2928]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:19.780000 audit[2928]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf1f8c930 a2=0 a3=7ffdf1f8c91c items=0 ppid=2820 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.780000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 01:57:19.879000 audit[2929]: NETFILTER_CFG table=filter:44 family=2 
entries=1 op=nft_register_chain pid=2929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.879000 audit[2929]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffca5c90c30 a2=0 a3=7ffca5c90c1c items=0 ppid=2820 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 01:57:19.880000 audit[2931]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.880000 audit[2931]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd71667e90 a2=0 a3=7ffd71667e7c items=0 ppid=2820 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Nov 1 01:57:19.882000 audit[2934]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.882000 audit[2934]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff14b88940 a2=0 a3=7fff14b8892c items=0 ppid=2820 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.882000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Nov 1 01:57:19.883000 audit[2935]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.883000 audit[2935]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccae16d90 a2=0 a3=7ffccae16d7c items=0 ppid=2820 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.883000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 01:57:19.884000 audit[2937]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.884000 audit[2937]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd8b4fd000 a2=0 a3=7ffd8b4fcfec items=0 ppid=2820 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.884000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 01:57:19.885000 audit[2938]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.885000 audit[2938]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 
a0=3 a1=7fff5d765d40 a2=0 a3=7fff5d765d2c items=0 ppid=2820 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.885000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 01:57:19.886000 audit[2940]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.886000 audit[2940]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd9a152a90 a2=0 a3=7ffd9a152a7c items=0 ppid=2820 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 01:57:19.888000 audit[2943]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.888000 audit[2943]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd4b3a6810 a2=0 a3=7ffd4b3a67fc items=0 ppid=2820 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.888000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Nov 1 01:57:19.889000 audit[2944]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.889000 audit[2944]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff21130430 a2=0 a3=7fff2113041c items=0 ppid=2820 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.889000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 01:57:19.890000 audit[2946]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.890000 audit[2946]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc19151d40 a2=0 a3=7ffc19151d2c items=0 ppid=2820 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 01:57:19.890000 audit[2947]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:19.890000 audit[2947]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf7b7d560 a2=0 
a3=7ffcf7b7d54c items=0 ppid=2820 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:19.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 01:57:20.490000 audit[2949]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:20.490000 audit[2949]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe6b96b20 a2=0 a3=7fffe6b96b0c items=0 ppid=2820 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 01:57:20.492000 audit[2952]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:20.492000 audit[2952]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdcbb16940 a2=0 a3=7ffdcbb1692c items=0 ppid=2820 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.492000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 01:57:20.494000 audit[2955]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:20.494000 audit[2955]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeaaf70fc0 a2=0 a3=7ffeaaf70fac items=0 ppid=2820 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.494000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 01:57:20.494000 audit[2956]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:20.494000 audit[2956]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc2e797450 a2=0 a3=7ffc2e79743c items=0 ppid=2820 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.494000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 01:57:20.495000 audit[2958]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:20.495000 audit[2958]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 
a0=3 a1=7ffea1954ed0 a2=0 a3=7ffea1954ebc items=0 ppid=2820 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.495000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:57:20.497000 audit[2961]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:20.497000 audit[2961]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc7ebf2520 a2=0 a3=7ffc7ebf250c items=0 ppid=2820 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:57:20.498000 audit[2962]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:20.498000 audit[2962]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff70b96480 a2=0 a3=7fff70b9646c items=0 ppid=2820 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 01:57:20.499000 
audit[2964]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:57:20.499000 audit[2964]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff7ccc3b70 a2=0 a3=7fff7ccc3b5c items=0 ppid=2820 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.499000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 01:57:20.514000 audit[2970]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:20.514000 audit[2970]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd566f97f0 a2=0 a3=7ffd566f97dc items=0 ppid=2820 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.514000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:20.533757 kubelet[2679]: I1101 01:57:20.533726 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r6brs" podStartSLOduration=1.5337148919999999 podStartE2EDuration="1.533714892s" podCreationTimestamp="2025-11-01 01:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:57:20.532706996 +0000 UTC m=+6.160343308" watchObservedRunningTime="2025-11-01 01:57:20.533714892 +0000 UTC 
m=+6.161351204" Nov 1 01:57:20.535000 audit[2970]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:20.535000 audit[2970]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd566f97f0 a2=0 a3=7ffd566f97dc items=0 ppid=2820 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.535000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:20.536000 audit[2975]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.536000 audit[2975]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffeeecc70d0 a2=0 a3=7ffeeecc70bc items=0 ppid=2820 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.536000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 01:57:20.538000 audit[2977]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.538000 audit[2977]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc70446f10 a2=0 a3=7ffc70446efc items=0 ppid=2820 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.538000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Nov 1 01:57:20.540000 audit[2980]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.540000 audit[2980]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc226e8160 a2=0 a3=7ffc226e814c items=0 ppid=2820 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.540000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Nov 1 01:57:20.541000 audit[2981]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.541000 audit[2981]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb4637b30 a2=0 a3=7ffdb4637b1c items=0 ppid=2820 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.541000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 01:57:20.543000 audit[2983]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.543000 audit[2983]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7fff8d015ee0 a2=0 a3=7fff8d015ecc items=0 ppid=2820 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.543000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 01:57:20.543000 audit[2984]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.543000 audit[2984]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe042dccf0 a2=0 a3=7ffe042dccdc items=0 ppid=2820 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.543000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 01:57:20.545000 audit[2986]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.545000 audit[2986]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffde7306730 a2=0 a3=7ffde730671c items=0 ppid=2820 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.545000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Nov 1 01:57:20.547000 audit[2989]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.547000 audit[2989]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffeda722d60 a2=0 a3=7ffeda722d4c items=0 ppid=2820 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.547000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 01:57:20.548000 audit[2990]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.548000 audit[2990]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffceb94bb0 a2=0 a3=7fffceb94b9c items=0 ppid=2820 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.548000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 01:57:20.550000 audit[2992]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.550000 audit[2992]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffd242c0a10 a2=0 a3=7ffd242c09fc items=0 ppid=2820 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 01:57:20.551000 audit[2993]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.551000 audit[2993]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd45ab2500 a2=0 a3=7ffd45ab24ec items=0 ppid=2820 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.551000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 01:57:20.553000 audit[2995]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.553000 audit[2995]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb346d1b0 a2=0 a3=7ffcb346d19c items=0 ppid=2820 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.553000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 01:57:20.556000 audit[2998]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.556000 audit[2998]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeae073b20 a2=0 a3=7ffeae073b0c items=0 ppid=2820 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.556000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 01:57:20.559000 audit[3001]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.559000 audit[3001]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffff6379430 a2=0 a3=7ffff637941c items=0 ppid=2820 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.559000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Nov 1 01:57:20.560000 audit[3002]: NETFILTER_CFG table=nat:79 family=10 entries=1 
op=nft_register_chain pid=3002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.560000 audit[3002]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe4cdcf720 a2=0 a3=7ffe4cdcf70c items=0 ppid=2820 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.560000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 01:57:20.562000 audit[3004]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.562000 audit[3004]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd19769030 a2=0 a3=7ffd1976901c items=0 ppid=2820 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.562000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:57:20.565000 audit[3007]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.565000 audit[3007]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff1f019910 a2=0 a3=7fff1f0198fc items=0 ppid=2820 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.565000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:57:20.566000 audit[3008]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.566000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8bf22720 a2=0 a3=7ffe8bf2270c items=0 ppid=2820 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 01:57:20.569000 audit[3010]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.569000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdc31874e0 a2=0 a3=7ffdc31874cc items=0 ppid=2820 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.569000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 01:57:20.570000 audit[3011]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.570000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf5d04780 a2=0 a3=7ffdf5d0476c 
items=0 ppid=2820 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.570000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 01:57:20.573000 audit[3013]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.573000 audit[3013]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffda8819b90 a2=0 a3=7ffda8819b7c items=0 ppid=2820 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.573000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 01:57:20.579000 audit[3016]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:57:20.579000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffebca44600 a2=0 a3=7ffebca445ec items=0 ppid=2820 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.579000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 01:57:20.584000 audit[3018]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 01:57:20.584000 audit[3018]: SYSCALL arch=c000003e syscall=46 
success=yes exit=2088 a0=3 a1=7ffc1042b0d0 a2=0 a3=7ffc1042b0bc items=0 ppid=2820 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.584000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:20.586000 audit[3018]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 01:57:20.586000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc1042b0d0 a2=0 a3=7ffc1042b0bc items=0 ppid=2820 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:20.586000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:20.993585 update_engine[1670]: I1101 01:57:20.993472 1670 update_attempter.cc:509] Updating boot flags... Nov 1 01:57:21.293313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608499177.mount: Deactivated successfully. 
Nov 1 01:57:21.790816 env[1679]: time="2025-11-01T01:57:21.790763917Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:21.791357 env[1679]: time="2025-11-01T01:57:21.791314135Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:21.791951 env[1679]: time="2025-11-01T01:57:21.791911751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:21.792602 env[1679]: time="2025-11-01T01:57:21.792563208Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:21.792895 env[1679]: time="2025-11-01T01:57:21.792851504Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 01:57:21.794194 env[1679]: time="2025-11-01T01:57:21.794149445Z" level=info msg="CreateContainer within sandbox \"39df6050c257c7ec30ceead9e16bf0651b29a639b5cefedb27656b6e26f0b4e6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 01:57:21.798087 env[1679]: time="2025-11-01T01:57:21.798050909Z" level=info msg="CreateContainer within sandbox \"39df6050c257c7ec30ceead9e16bf0651b29a639b5cefedb27656b6e26f0b4e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c71b58f060af247dc36f0fd689d492edbcd438138f00356fc8d3b9fe4650117d\"" Nov 1 01:57:21.798293 env[1679]: time="2025-11-01T01:57:21.798280207Z" level=info msg="StartContainer for 
\"c71b58f060af247dc36f0fd689d492edbcd438138f00356fc8d3b9fe4650117d\"" Nov 1 01:57:21.827228 env[1679]: time="2025-11-01T01:57:21.827206472Z" level=info msg="StartContainer for \"c71b58f060af247dc36f0fd689d492edbcd438138f00356fc8d3b9fe4650117d\" returns successfully" Nov 1 01:57:22.545424 kubelet[2679]: I1101 01:57:22.545370 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-mqst4" podStartSLOduration=1.464005305 podStartE2EDuration="3.545359295s" podCreationTimestamp="2025-11-01 01:57:19 +0000 UTC" firstStartedPulling="2025-11-01 01:57:19.712065621 +0000 UTC m=+5.339701931" lastFinishedPulling="2025-11-01 01:57:21.793419612 +0000 UTC m=+7.421055921" observedRunningTime="2025-11-01 01:57:22.545311707 +0000 UTC m=+8.172948019" watchObservedRunningTime="2025-11-01 01:57:22.545359295 +0000 UTC m=+8.172995603" Nov 1 01:57:26.385958 sudo[1929]: pam_unix(sudo:session): session closed for user root Nov 1 01:57:26.385000 audit[1929]: USER_END pid=1929 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:57:26.387402 sshd[1924]: pam_unix(sshd:session): session closed for user core Nov 1 01:57:26.389822 systemd[1]: sshd@8-139.178.90.71:22-147.75.109.163:43476.service: Deactivated successfully. Nov 1 01:57:26.390781 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 01:57:26.390812 systemd-logind[1668]: Session 11 logged out. Waiting for processes to exit. Nov 1 01:57:26.391456 systemd-logind[1668]: Removed session 11. 
Nov 1 01:57:26.413089 kernel: kauditd_printk_skb: 143 callbacks suppressed Nov 1 01:57:26.413175 kernel: audit: type=1106 audit(1761962246.385:260): pid=1929 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:57:26.385000 audit[1929]: CRED_DISP pid=1929 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:57:26.586429 kernel: audit: type=1104 audit(1761962246.385:261): pid=1929 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:57:26.586542 kernel: audit: type=1106 audit(1761962246.387:262): pid=1924 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:57:26.387000 audit[1924]: USER_END pid=1924 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:57:26.387000 audit[1924]: CRED_DISP pid=1924 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:57:26.769500 kernel: audit: type=1104 audit(1761962246.387:263): pid=1924 uid=0 auid=500 ses=11 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:57:26.769603 kernel: audit: type=1131 audit(1761962246.389:264): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.90.71:22-147.75.109.163:43476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:57:26.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.90.71:22-147.75.109.163:43476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:57:26.871000 audit[3200]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:26.871000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcef03a3e0 a2=0 a3=7ffcef03a3cc items=0 ppid=2820 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:27.028655 kernel: audit: type=1325 audit(1761962246.871:265): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:27.028914 kernel: audit: type=1300 audit(1761962246.871:265): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcef03a3e0 a2=0 a3=7ffcef03a3cc items=0 ppid=2820 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:27.028942 kernel: audit: type=1327 audit(1761962246.871:265): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:26.871000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:27.031000 audit[3200]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:27.146066 kernel: audit: type=1325 audit(1761962247.031:266): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:27.146160 kernel: audit: type=1300 audit(1761962247.031:266): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcef03a3e0 a2=0 a3=0 items=0 ppid=2820 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:27.031000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcef03a3e0 a2=0 a3=0 items=0 ppid=2820 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:27.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:27.248000 audit[3203]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:27.248000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdcd9d8300 a2=0 a3=7ffdcd9d82ec items=0 ppid=2820 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:27.248000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:27.265000 audit[3203]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:27.265000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdcd9d8300 a2=0 a3=0 items=0 ppid=2820 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:27.265000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:28.468000 audit[3205]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:28.468000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc294d69a0 a2=0 a3=7ffc294d698c items=0 ppid=2820 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:28.468000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:28.476000 audit[3205]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:28.476000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc294d69a0 a2=0 a3=0 items=0 ppid=2820 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:28.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:29.491000 audit[3207]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:29.491000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc6210bb80 a2=0 a3=7ffc6210bb6c items=0 ppid=2820 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:29.491000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:29.506000 audit[3207]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:29.506000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc6210bb80 a2=0 a3=0 items=0 ppid=2820 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:29.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:30.373000 audit[3209]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:30.373000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffc1082040 a2=0 a3=7fffc108202c items=0 ppid=2820 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:30.373000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:30.382000 audit[3209]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:30.382000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffc1082040 a2=0 a3=0 items=0 ppid=2820 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:30.382000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:30.453616 kubelet[2679]: I1101 01:57:30.453568 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3a4b986a-3b19-4535-8f4c-f48b509ebeb1-typha-certs\") pod \"calico-typha-7c576c8857-jmgtz\" (UID: \"3a4b986a-3b19-4535-8f4c-f48b509ebeb1\") " pod="calico-system/calico-typha-7c576c8857-jmgtz" Nov 1 01:57:30.454339 kubelet[2679]: I1101 01:57:30.453626 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a4b986a-3b19-4535-8f4c-f48b509ebeb1-tigera-ca-bundle\") pod \"calico-typha-7c576c8857-jmgtz\" (UID: \"3a4b986a-3b19-4535-8f4c-f48b509ebeb1\") " pod="calico-system/calico-typha-7c576c8857-jmgtz" Nov 1 01:57:30.454339 kubelet[2679]: I1101 01:57:30.453662 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgnbd\" 
(UniqueName: \"kubernetes.io/projected/3a4b986a-3b19-4535-8f4c-f48b509ebeb1-kube-api-access-mgnbd\") pod \"calico-typha-7c576c8857-jmgtz\" (UID: \"3a4b986a-3b19-4535-8f4c-f48b509ebeb1\") " pod="calico-system/calico-typha-7c576c8857-jmgtz" Nov 1 01:57:30.655875 kubelet[2679]: I1101 01:57:30.655696 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/639612c4-79f4-4708-b649-aabaefc86dac-flexvol-driver-host\") pod \"calico-node-lxlbg\" (UID: \"639612c4-79f4-4708-b649-aabaefc86dac\") " pod="calico-system/calico-node-lxlbg" Nov 1 01:57:30.655875 kubelet[2679]: I1101 01:57:30.655767 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/639612c4-79f4-4708-b649-aabaefc86dac-lib-modules\") pod \"calico-node-lxlbg\" (UID: \"639612c4-79f4-4708-b649-aabaefc86dac\") " pod="calico-system/calico-node-lxlbg" Nov 1 01:57:30.655875 kubelet[2679]: I1101 01:57:30.655811 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g29z\" (UniqueName: \"kubernetes.io/projected/639612c4-79f4-4708-b649-aabaefc86dac-kube-api-access-2g29z\") pod \"calico-node-lxlbg\" (UID: \"639612c4-79f4-4708-b649-aabaefc86dac\") " pod="calico-system/calico-node-lxlbg" Nov 1 01:57:30.655875 kubelet[2679]: I1101 01:57:30.655849 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/639612c4-79f4-4708-b649-aabaefc86dac-cni-log-dir\") pod \"calico-node-lxlbg\" (UID: \"639612c4-79f4-4708-b649-aabaefc86dac\") " pod="calico-system/calico-node-lxlbg" Nov 1 01:57:30.656304 kubelet[2679]: I1101 01:57:30.655913 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/639612c4-79f4-4708-b649-aabaefc86dac-var-lib-calico\") pod \"calico-node-lxlbg\" (UID: \"639612c4-79f4-4708-b649-aabaefc86dac\") " pod="calico-system/calico-node-lxlbg" Nov 1 01:57:30.656304 kubelet[2679]: I1101 01:57:30.655973 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/639612c4-79f4-4708-b649-aabaefc86dac-node-certs\") pod \"calico-node-lxlbg\" (UID: \"639612c4-79f4-4708-b649-aabaefc86dac\") " pod="calico-system/calico-node-lxlbg" Nov 1 01:57:30.656304 kubelet[2679]: I1101 01:57:30.656037 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/639612c4-79f4-4708-b649-aabaefc86dac-xtables-lock\") pod \"calico-node-lxlbg\" (UID: \"639612c4-79f4-4708-b649-aabaefc86dac\") " pod="calico-system/calico-node-lxlbg" Nov 1 01:57:30.656304 kubelet[2679]: I1101 01:57:30.656099 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/639612c4-79f4-4708-b649-aabaefc86dac-cni-net-dir\") pod \"calico-node-lxlbg\" (UID: \"639612c4-79f4-4708-b649-aabaefc86dac\") " pod="calico-system/calico-node-lxlbg" Nov 1 01:57:30.656304 kubelet[2679]: I1101 01:57:30.656160 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/639612c4-79f4-4708-b649-aabaefc86dac-tigera-ca-bundle\") pod \"calico-node-lxlbg\" (UID: \"639612c4-79f4-4708-b649-aabaefc86dac\") " pod="calico-system/calico-node-lxlbg" Nov 1 01:57:30.656724 kubelet[2679]: I1101 01:57:30.656257 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/639612c4-79f4-4708-b649-aabaefc86dac-cni-bin-dir\") pod 
\"calico-node-lxlbg\" (UID: \"639612c4-79f4-4708-b649-aabaefc86dac\") " pod="calico-system/calico-node-lxlbg" Nov 1 01:57:30.656724 kubelet[2679]: I1101 01:57:30.656346 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/639612c4-79f4-4708-b649-aabaefc86dac-policysync\") pod \"calico-node-lxlbg\" (UID: \"639612c4-79f4-4708-b649-aabaefc86dac\") " pod="calico-system/calico-node-lxlbg" Nov 1 01:57:30.656724 kubelet[2679]: I1101 01:57:30.656392 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/639612c4-79f4-4708-b649-aabaefc86dac-var-run-calico\") pod \"calico-node-lxlbg\" (UID: \"639612c4-79f4-4708-b649-aabaefc86dac\") " pod="calico-system/calico-node-lxlbg" Nov 1 01:57:30.708504 env[1679]: time="2025-11-01T01:57:30.708361694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c576c8857-jmgtz,Uid:3a4b986a-3b19-4535-8f4c-f48b509ebeb1,Namespace:calico-system,Attempt:0,}" Nov 1 01:57:30.732668 env[1679]: time="2025-11-01T01:57:30.732487194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:30.732668 env[1679]: time="2025-11-01T01:57:30.732586342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:30.732668 env[1679]: time="2025-11-01T01:57:30.732626843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:30.733157 env[1679]: time="2025-11-01T01:57:30.732978237Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44f0454435b25ec3376069ad3eac3d2d94e9602243505f813b28b5647a1f8c76 pid=3219 runtime=io.containerd.runc.v2 Nov 1 01:57:30.761389 kubelet[2679]: E1101 01:57:30.761292 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.761978 kubelet[2679]: W1101 01:57:30.761881 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.762419 kubelet[2679]: E1101 01:57:30.762276 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.767978 kubelet[2679]: E1101 01:57:30.767931 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.767978 kubelet[2679]: W1101 01:57:30.767967 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.768272 kubelet[2679]: E1101 01:57:30.767999 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.777204 kubelet[2679]: E1101 01:57:30.777168 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.777204 kubelet[2679]: W1101 01:57:30.777194 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.777437 kubelet[2679]: E1101 01:57:30.777218 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.804054 kubelet[2679]: E1101 01:57:30.804008 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:57:30.815502 env[1679]: time="2025-11-01T01:57:30.815469144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c576c8857-jmgtz,Uid:3a4b986a-3b19-4535-8f4c-f48b509ebeb1,Namespace:calico-system,Attempt:0,} returns sandbox id \"44f0454435b25ec3376069ad3eac3d2d94e9602243505f813b28b5647a1f8c76\"" Nov 1 01:57:30.816375 env[1679]: time="2025-11-01T01:57:30.816357294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 01:57:30.843502 kubelet[2679]: E1101 01:57:30.843485 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.843580 kubelet[2679]: W1101 01:57:30.843500 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, 
output: "" Nov 1 01:57:30.843580 kubelet[2679]: E1101 01:57:30.843519 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.843653 kubelet[2679]: E1101 01:57:30.843645 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.843701 kubelet[2679]: W1101 01:57:30.843653 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.843701 kubelet[2679]: E1101 01:57:30.843663 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.843785 kubelet[2679]: E1101 01:57:30.843776 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.843785 kubelet[2679]: W1101 01:57:30.843783 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.843868 kubelet[2679]: E1101 01:57:30.843792 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.843935 kubelet[2679]: E1101 01:57:30.843927 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.843935 kubelet[2679]: W1101 01:57:30.843934 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.844008 kubelet[2679]: E1101 01:57:30.843943 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.844056 kubelet[2679]: E1101 01:57:30.844049 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.844056 kubelet[2679]: W1101 01:57:30.844055 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.844133 kubelet[2679]: E1101 01:57:30.844063 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.844171 kubelet[2679]: E1101 01:57:30.844158 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.844171 kubelet[2679]: W1101 01:57:30.844166 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.844237 kubelet[2679]: E1101 01:57:30.844173 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.844273 kubelet[2679]: E1101 01:57:30.844266 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.844315 kubelet[2679]: W1101 01:57:30.844273 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.844315 kubelet[2679]: E1101 01:57:30.844281 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.844392 kubelet[2679]: E1101 01:57:30.844386 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.844437 kubelet[2679]: W1101 01:57:30.844392 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.844437 kubelet[2679]: E1101 01:57:30.844401 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.844510 kubelet[2679]: E1101 01:57:30.844501 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.844510 kubelet[2679]: W1101 01:57:30.844508 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.844583 kubelet[2679]: E1101 01:57:30.844516 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.844625 kubelet[2679]: E1101 01:57:30.844618 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.844667 kubelet[2679]: W1101 01:57:30.844627 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.844667 kubelet[2679]: E1101 01:57:30.844635 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.844733 kubelet[2679]: E1101 01:57:30.844728 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.844779 kubelet[2679]: W1101 01:57:30.844735 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.844779 kubelet[2679]: E1101 01:57:30.844743 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.844851 kubelet[2679]: E1101 01:57:30.844844 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.844892 kubelet[2679]: W1101 01:57:30.844851 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.844892 kubelet[2679]: E1101 01:57:30.844859 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.845070 kubelet[2679]: E1101 01:57:30.845063 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.845070 kubelet[2679]: W1101 01:57:30.845070 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.845123 kubelet[2679]: E1101 01:57:30.845077 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.845210 kubelet[2679]: E1101 01:57:30.845204 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.845235 kubelet[2679]: W1101 01:57:30.845210 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.845235 kubelet[2679]: E1101 01:57:30.845216 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.845306 kubelet[2679]: E1101 01:57:30.845302 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.845339 kubelet[2679]: W1101 01:57:30.845307 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.845339 kubelet[2679]: E1101 01:57:30.845312 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.845403 kubelet[2679]: E1101 01:57:30.845398 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.845430 kubelet[2679]: W1101 01:57:30.845404 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.845430 kubelet[2679]: E1101 01:57:30.845409 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.845492 kubelet[2679]: E1101 01:57:30.845487 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.845517 kubelet[2679]: W1101 01:57:30.845492 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.845517 kubelet[2679]: E1101 01:57:30.845497 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.845573 kubelet[2679]: E1101 01:57:30.845568 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.845597 kubelet[2679]: W1101 01:57:30.845574 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.845597 kubelet[2679]: E1101 01:57:30.845578 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.845655 kubelet[2679]: E1101 01:57:30.845650 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.845681 kubelet[2679]: W1101 01:57:30.845655 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.845681 kubelet[2679]: E1101 01:57:30.845660 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.845737 kubelet[2679]: E1101 01:57:30.845732 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.845759 kubelet[2679]: W1101 01:57:30.845737 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.845759 kubelet[2679]: E1101 01:57:30.845742 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.858796 kubelet[2679]: E1101 01:57:30.858702 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.858796 kubelet[2679]: W1101 01:57:30.858740 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.858796 kubelet[2679]: E1101 01:57:30.858776 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.859274 kubelet[2679]: I1101 01:57:30.858844 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/66d2f097-1517-44b9-891a-35d40c5f36ae-varrun\") pod \"csi-node-driver-4r6nm\" (UID: \"66d2f097-1517-44b9-891a-35d40c5f36ae\") " pod="calico-system/csi-node-driver-4r6nm" Nov 1 01:57:30.859419 kubelet[2679]: E1101 01:57:30.859368 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.859419 kubelet[2679]: W1101 01:57:30.859400 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.859663 kubelet[2679]: E1101 01:57:30.859437 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.859663 kubelet[2679]: I1101 01:57:30.859485 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/66d2f097-1517-44b9-891a-35d40c5f36ae-registration-dir\") pod \"csi-node-driver-4r6nm\" (UID: \"66d2f097-1517-44b9-891a-35d40c5f36ae\") " pod="calico-system/csi-node-driver-4r6nm" Nov 1 01:57:30.860187 kubelet[2679]: E1101 01:57:30.860106 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.860187 kubelet[2679]: W1101 01:57:30.860147 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.860187 kubelet[2679]: E1101 01:57:30.860194 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.860844 kubelet[2679]: E1101 01:57:30.860767 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.860844 kubelet[2679]: W1101 01:57:30.860806 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.860844 kubelet[2679]: E1101 01:57:30.860850 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.861489 kubelet[2679]: E1101 01:57:30.861418 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.861489 kubelet[2679]: W1101 01:57:30.861448 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.861489 kubelet[2679]: E1101 01:57:30.861485 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.861952 kubelet[2679]: I1101 01:57:30.861543 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/66d2f097-1517-44b9-891a-35d40c5f36ae-socket-dir\") pod \"csi-node-driver-4r6nm\" (UID: \"66d2f097-1517-44b9-891a-35d40c5f36ae\") " pod="calico-system/csi-node-driver-4r6nm" Nov 1 01:57:30.862199 kubelet[2679]: E1101 01:57:30.862137 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.862199 kubelet[2679]: W1101 01:57:30.862180 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.862637 kubelet[2679]: E1101 01:57:30.862225 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.862812 kubelet[2679]: E1101 01:57:30.862765 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.862812 kubelet[2679]: W1101 01:57:30.862803 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.863071 kubelet[2679]: E1101 01:57:30.862848 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.863485 kubelet[2679]: E1101 01:57:30.863449 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.863485 kubelet[2679]: W1101 01:57:30.863479 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.863717 kubelet[2679]: E1101 01:57:30.863518 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.863717 kubelet[2679]: I1101 01:57:30.863574 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwdwz\" (UniqueName: \"kubernetes.io/projected/66d2f097-1517-44b9-891a-35d40c5f36ae-kube-api-access-wwdwz\") pod \"csi-node-driver-4r6nm\" (UID: \"66d2f097-1517-44b9-891a-35d40c5f36ae\") " pod="calico-system/csi-node-driver-4r6nm" Nov 1 01:57:30.864101 kubelet[2679]: E1101 01:57:30.864067 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.864252 kubelet[2679]: W1101 01:57:30.864101 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.864252 kubelet[2679]: E1101 01:57:30.864141 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.864686 kubelet[2679]: E1101 01:57:30.864657 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.864832 kubelet[2679]: W1101 01:57:30.864686 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.864832 kubelet[2679]: E1101 01:57:30.864723 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.865244 kubelet[2679]: E1101 01:57:30.865215 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.865244 kubelet[2679]: W1101 01:57:30.865243 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.865559 kubelet[2679]: E1101 01:57:30.865278 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.865559 kubelet[2679]: I1101 01:57:30.865348 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66d2f097-1517-44b9-891a-35d40c5f36ae-kubelet-dir\") pod \"csi-node-driver-4r6nm\" (UID: \"66d2f097-1517-44b9-891a-35d40c5f36ae\") " pod="calico-system/csi-node-driver-4r6nm" Nov 1 01:57:30.866022 kubelet[2679]: E1101 01:57:30.865943 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.866022 kubelet[2679]: W1101 01:57:30.865989 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.866277 kubelet[2679]: E1101 01:57:30.866035 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.866600 kubelet[2679]: E1101 01:57:30.866543 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.866600 kubelet[2679]: W1101 01:57:30.866580 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.866877 kubelet[2679]: E1101 01:57:30.866623 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.867137 kubelet[2679]: E1101 01:57:30.867099 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.867137 kubelet[2679]: W1101 01:57:30.867132 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.867440 kubelet[2679]: E1101 01:57:30.867167 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.867754 kubelet[2679]: E1101 01:57:30.867697 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.867754 kubelet[2679]: W1101 01:57:30.867735 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.867986 kubelet[2679]: E1101 01:57:30.867781 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.928566 env[1679]: time="2025-11-01T01:57:30.928291086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lxlbg,Uid:639612c4-79f4-4708-b649-aabaefc86dac,Namespace:calico-system,Attempt:0,}" Nov 1 01:57:30.949181 env[1679]: time="2025-11-01T01:57:30.949081908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:30.949181 env[1679]: time="2025-11-01T01:57:30.949151986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:30.949502 env[1679]: time="2025-11-01T01:57:30.949179596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:30.949588 env[1679]: time="2025-11-01T01:57:30.949498932Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/abf0c315b02f10ff3169a92fd73e59489fc6c662a9d4c79fbfa01a75539c572d pid=3310 runtime=io.containerd.runc.v2 Nov 1 01:57:30.966122 kubelet[2679]: E1101 01:57:30.966070 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.966122 kubelet[2679]: W1101 01:57:30.966118 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.966464 kubelet[2679]: E1101 01:57:30.966169 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.966694 kubelet[2679]: E1101 01:57:30.966652 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.966694 kubelet[2679]: W1101 01:57:30.966674 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.966884 kubelet[2679]: E1101 01:57:30.966700 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.967112 kubelet[2679]: E1101 01:57:30.967084 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.967194 kubelet[2679]: W1101 01:57:30.967115 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.967194 kubelet[2679]: E1101 01:57:30.967151 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.967532 kubelet[2679]: E1101 01:57:30.967488 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.967532 kubelet[2679]: W1101 01:57:30.967507 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.967532 kubelet[2679]: E1101 01:57:30.967530 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.967862 kubelet[2679]: E1101 01:57:30.967844 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.967954 kubelet[2679]: W1101 01:57:30.967863 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.967954 kubelet[2679]: E1101 01:57:30.967888 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.968265 kubelet[2679]: E1101 01:57:30.968247 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.968386 kubelet[2679]: W1101 01:57:30.968265 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.968386 kubelet[2679]: E1101 01:57:30.968308 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.968605 kubelet[2679]: E1101 01:57:30.968563 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.968605 kubelet[2679]: W1101 01:57:30.968581 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.968769 kubelet[2679]: E1101 01:57:30.968617 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.968917 kubelet[2679]: E1101 01:57:30.968900 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.969010 kubelet[2679]: W1101 01:57:30.968918 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.969010 kubelet[2679]: E1101 01:57:30.968939 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.969226 kubelet[2679]: E1101 01:57:30.969208 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.969303 kubelet[2679]: W1101 01:57:30.969226 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.969303 kubelet[2679]: E1101 01:57:30.969248 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.969542 kubelet[2679]: E1101 01:57:30.969524 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.969542 kubelet[2679]: W1101 01:57:30.969542 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.969702 kubelet[2679]: E1101 01:57:30.969563 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.969920 kubelet[2679]: E1101 01:57:30.969897 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.969920 kubelet[2679]: W1101 01:57:30.969914 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.970106 kubelet[2679]: E1101 01:57:30.969934 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.970370 kubelet[2679]: E1101 01:57:30.970320 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.970479 kubelet[2679]: W1101 01:57:30.970372 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.970479 kubelet[2679]: E1101 01:57:30.970407 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.970829 kubelet[2679]: E1101 01:57:30.970809 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.970914 kubelet[2679]: W1101 01:57:30.970829 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.970914 kubelet[2679]: E1101 01:57:30.970888 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.971184 kubelet[2679]: E1101 01:57:30.971164 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.971279 kubelet[2679]: W1101 01:57:30.971185 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.971279 kubelet[2679]: E1101 01:57:30.971234 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.971531 kubelet[2679]: E1101 01:57:30.971488 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.971531 kubelet[2679]: W1101 01:57:30.971505 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.971531 kubelet[2679]: E1101 01:57:30.971529 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.971852 kubelet[2679]: E1101 01:57:30.971809 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.971852 kubelet[2679]: W1101 01:57:30.971826 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.971852 kubelet[2679]: E1101 01:57:30.971849 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.972130 kubelet[2679]: E1101 01:57:30.972112 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.972214 kubelet[2679]: W1101 01:57:30.972137 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.972214 kubelet[2679]: E1101 01:57:30.972160 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.972547 kubelet[2679]: E1101 01:57:30.972509 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.972547 kubelet[2679]: W1101 01:57:30.972527 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.972725 kubelet[2679]: E1101 01:57:30.972570 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.972820 kubelet[2679]: E1101 01:57:30.972790 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.972820 kubelet[2679]: W1101 01:57:30.972807 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.972967 kubelet[2679]: E1101 01:57:30.972843 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.973086 kubelet[2679]: E1101 01:57:30.973067 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.973086 kubelet[2679]: W1101 01:57:30.973085 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.973232 kubelet[2679]: E1101 01:57:30.973106 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.973394 kubelet[2679]: E1101 01:57:30.973374 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.973394 kubelet[2679]: W1101 01:57:30.973393 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.973551 kubelet[2679]: E1101 01:57:30.973414 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.973856 kubelet[2679]: E1101 01:57:30.973833 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.973856 kubelet[2679]: W1101 01:57:30.973856 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.974045 kubelet[2679]: E1101 01:57:30.973884 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.974248 kubelet[2679]: E1101 01:57:30.974228 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.974362 kubelet[2679]: W1101 01:57:30.974249 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.974362 kubelet[2679]: E1101 01:57:30.974273 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.974722 kubelet[2679]: E1101 01:57:30.974693 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.974814 kubelet[2679]: W1101 01:57:30.974724 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.974814 kubelet[2679]: E1101 01:57:30.974757 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:30.975147 kubelet[2679]: E1101 01:57:30.975126 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.975147 kubelet[2679]: W1101 01:57:30.975146 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.975317 kubelet[2679]: E1101 01:57:30.975167 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:30.985503 kubelet[2679]: E1101 01:57:30.985472 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:30.985503 kubelet[2679]: W1101 01:57:30.985494 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:30.985712 kubelet[2679]: E1101 01:57:30.985519 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:31.001698 env[1679]: time="2025-11-01T01:57:31.001612842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lxlbg,Uid:639612c4-79f4-4708-b649-aabaefc86dac,Namespace:calico-system,Attempt:0,} returns sandbox id \"abf0c315b02f10ff3169a92fd73e59489fc6c662a9d4c79fbfa01a75539c572d\"" Nov 1 01:57:31.414000 audit[3371]: NETFILTER_CFG table=filter:99 family=2 entries=22 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:31.459732 kernel: kauditd_printk_skb: 25 callbacks suppressed Nov 1 01:57:31.459845 kernel: audit: type=1325 audit(1761962251.414:275): table=filter:99 family=2 entries=22 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:31.414000 audit[3371]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe57260800 a2=0 a3=7ffe572607ec items=0 ppid=2820 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:31.617416 kernel: audit: type=1300 audit(1761962251.414:275): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe57260800 a2=0 a3=7ffe572607ec items=0 ppid=2820 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:31.617460 kernel: audit: type=1327 audit(1761962251.414:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:31.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:31.677000 audit[3371]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule 
pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:31.677000 audit[3371]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe57260800 a2=0 a3=0 items=0 ppid=2820 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:31.835295 kernel: audit: type=1325 audit(1761962251.677:276): table=nat:100 family=2 entries=12 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:31.835360 kernel: audit: type=1300 audit(1761962251.677:276): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe57260800 a2=0 a3=0 items=0 ppid=2820 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:31.835378 kernel: audit: type=1327 audit(1761962251.677:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:31.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:32.511456 kubelet[2679]: E1101 01:57:32.511373 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:57:32.802134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3557275603.mount: Deactivated successfully. 
Nov 1 01:57:33.927733 env[1679]: time="2025-11-01T01:57:33.927678617Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:33.928308 env[1679]: time="2025-11-01T01:57:33.928273841Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:33.928993 env[1679]: time="2025-11-01T01:57:33.928959538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:33.930255 env[1679]: time="2025-11-01T01:57:33.930214477Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:33.930458 env[1679]: time="2025-11-01T01:57:33.930422202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 01:57:33.931042 env[1679]: time="2025-11-01T01:57:33.931029930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 01:57:33.934675 env[1679]: time="2025-11-01T01:57:33.934630300Z" level=info msg="CreateContainer within sandbox \"44f0454435b25ec3376069ad3eac3d2d94e9602243505f813b28b5647a1f8c76\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 01:57:33.939021 env[1679]: time="2025-11-01T01:57:33.938973267Z" level=info msg="CreateContainer within sandbox \"44f0454435b25ec3376069ad3eac3d2d94e9602243505f813b28b5647a1f8c76\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"65522ff95b9e604c58a83477f43a68c643862a39559cc6b330387e3fd8d26445\"" Nov 1 01:57:33.939230 env[1679]: time="2025-11-01T01:57:33.939214983Z" level=info msg="StartContainer for \"65522ff95b9e604c58a83477f43a68c643862a39559cc6b330387e3fd8d26445\"" Nov 1 01:57:33.973053 env[1679]: time="2025-11-01T01:57:33.973025787Z" level=info msg="StartContainer for \"65522ff95b9e604c58a83477f43a68c643862a39559cc6b330387e3fd8d26445\" returns successfully" Nov 1 01:57:34.512311 kubelet[2679]: E1101 01:57:34.512240 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:57:34.569934 kubelet[2679]: E1101 01:57:34.569871 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.569934 kubelet[2679]: W1101 01:57:34.569902 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.569934 kubelet[2679]: E1101 01:57:34.569925 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.570246 kubelet[2679]: E1101 01:57:34.570229 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.570246 kubelet[2679]: W1101 01:57:34.570245 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.570380 kubelet[2679]: E1101 01:57:34.570261 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.570658 kubelet[2679]: E1101 01:57:34.570606 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.570658 kubelet[2679]: W1101 01:57:34.570626 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.570658 kubelet[2679]: E1101 01:57:34.570644 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.571083 kubelet[2679]: E1101 01:57:34.571029 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.571083 kubelet[2679]: W1101 01:57:34.571050 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.571083 kubelet[2679]: E1101 01:57:34.571068 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.571376 kubelet[2679]: E1101 01:57:34.571359 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.571376 kubelet[2679]: W1101 01:57:34.571375 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.571493 kubelet[2679]: E1101 01:57:34.571390 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.571696 kubelet[2679]: E1101 01:57:34.571677 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.571763 kubelet[2679]: W1101 01:57:34.571698 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.571763 kubelet[2679]: E1101 01:57:34.571717 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.572006 kubelet[2679]: E1101 01:57:34.571992 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.572065 kubelet[2679]: W1101 01:57:34.572007 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.572065 kubelet[2679]: E1101 01:57:34.572021 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.572241 kubelet[2679]: E1101 01:57:34.572227 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.572302 kubelet[2679]: W1101 01:57:34.572241 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.572302 kubelet[2679]: E1101 01:57:34.572255 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.572575 kubelet[2679]: E1101 01:57:34.572560 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.572637 kubelet[2679]: W1101 01:57:34.572576 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.572637 kubelet[2679]: E1101 01:57:34.572590 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.572863 kubelet[2679]: E1101 01:57:34.572825 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.572863 kubelet[2679]: W1101 01:57:34.572838 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.572863 kubelet[2679]: E1101 01:57:34.572851 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.573055 kubelet[2679]: E1101 01:57:34.573042 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.573113 kubelet[2679]: W1101 01:57:34.573056 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.573113 kubelet[2679]: E1101 01:57:34.573097 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.573309 kubelet[2679]: E1101 01:57:34.573295 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.573309 kubelet[2679]: W1101 01:57:34.573308 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.573449 kubelet[2679]: E1101 01:57:34.573321 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.573628 kubelet[2679]: E1101 01:57:34.573592 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.573628 kubelet[2679]: W1101 01:57:34.573605 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.573628 kubelet[2679]: E1101 01:57:34.573617 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.573852 kubelet[2679]: E1101 01:57:34.573813 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.573852 kubelet[2679]: W1101 01:57:34.573827 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.573852 kubelet[2679]: E1101 01:57:34.573839 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.574040 kubelet[2679]: E1101 01:57:34.574026 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.574100 kubelet[2679]: W1101 01:57:34.574040 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.574100 kubelet[2679]: E1101 01:57:34.574053 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.577928 kubelet[2679]: I1101 01:57:34.577845 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c576c8857-jmgtz" podStartSLOduration=1.463007894 podStartE2EDuration="4.577817634s" podCreationTimestamp="2025-11-01 01:57:30 +0000 UTC" firstStartedPulling="2025-11-01 01:57:30.816168698 +0000 UTC m=+16.443805009" lastFinishedPulling="2025-11-01 01:57:33.930978442 +0000 UTC m=+19.558614749" observedRunningTime="2025-11-01 01:57:34.577232713 +0000 UTC m=+20.204869077" watchObservedRunningTime="2025-11-01 01:57:34.577817634 +0000 UTC m=+20.205453972" Nov 1 01:57:34.595755 kubelet[2679]: E1101 01:57:34.595695 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.595755 kubelet[2679]: W1101 01:57:34.595718 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.595755 kubelet[2679]: E1101 01:57:34.595741 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.596099 kubelet[2679]: E1101 01:57:34.596047 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.596099 kubelet[2679]: W1101 01:57:34.596062 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.596099 kubelet[2679]: E1101 01:57:34.596082 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.596440 kubelet[2679]: E1101 01:57:34.596389 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.596440 kubelet[2679]: W1101 01:57:34.596404 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.596440 kubelet[2679]: E1101 01:57:34.596421 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.596809 kubelet[2679]: E1101 01:57:34.596764 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.596809 kubelet[2679]: W1101 01:57:34.596779 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.596809 kubelet[2679]: E1101 01:57:34.596798 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.597092 kubelet[2679]: E1101 01:57:34.597055 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.597092 kubelet[2679]: W1101 01:57:34.597069 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.597092 kubelet[2679]: E1101 01:57:34.597085 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.597298 kubelet[2679]: E1101 01:57:34.597285 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.597372 kubelet[2679]: W1101 01:57:34.597299 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.597372 kubelet[2679]: E1101 01:57:34.597344 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.597541 kubelet[2679]: E1101 01:57:34.597527 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.597612 kubelet[2679]: W1101 01:57:34.597542 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.597612 kubelet[2679]: E1101 01:57:34.597568 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.597815 kubelet[2679]: E1101 01:57:34.597801 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.597873 kubelet[2679]: W1101 01:57:34.597816 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.597873 kubelet[2679]: E1101 01:57:34.597841 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.598111 kubelet[2679]: E1101 01:57:34.598097 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.598222 kubelet[2679]: W1101 01:57:34.598111 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.598222 kubelet[2679]: E1101 01:57:34.598128 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.598494 kubelet[2679]: E1101 01:57:34.598472 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.598558 kubelet[2679]: W1101 01:57:34.598496 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.598558 kubelet[2679]: E1101 01:57:34.598524 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.598768 kubelet[2679]: E1101 01:57:34.598754 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.598837 kubelet[2679]: W1101 01:57:34.598769 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.598837 kubelet[2679]: E1101 01:57:34.598786 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.599042 kubelet[2679]: E1101 01:57:34.599028 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.599100 kubelet[2679]: W1101 01:57:34.599042 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.599100 kubelet[2679]: E1101 01:57:34.599061 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.599294 kubelet[2679]: E1101 01:57:34.599281 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.599384 kubelet[2679]: W1101 01:57:34.599295 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.599384 kubelet[2679]: E1101 01:57:34.599319 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.599515 kubelet[2679]: E1101 01:57:34.599502 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.599574 kubelet[2679]: W1101 01:57:34.599516 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.599574 kubelet[2679]: E1101 01:57:34.599529 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.599726 kubelet[2679]: E1101 01:57:34.599713 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.599789 kubelet[2679]: W1101 01:57:34.599727 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.599789 kubelet[2679]: E1101 01:57:34.599744 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.599970 kubelet[2679]: E1101 01:57:34.599957 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.600026 kubelet[2679]: W1101 01:57:34.599971 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.600026 kubelet[2679]: E1101 01:57:34.599986 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:34.600363 kubelet[2679]: E1101 01:57:34.600324 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.600485 kubelet[2679]: W1101 01:57:34.600366 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.600485 kubelet[2679]: E1101 01:57:34.600389 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:34.600714 kubelet[2679]: E1101 01:57:34.600696 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:34.600780 kubelet[2679]: W1101 01:57:34.600716 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:34.600780 kubelet[2679]: E1101 01:57:34.600736 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.524640 env[1679]: time="2025-11-01T01:57:35.524584798Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:35.525186 env[1679]: time="2025-11-01T01:57:35.525142031Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:35.543411 env[1679]: time="2025-11-01T01:57:35.543363092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:35.545212 env[1679]: time="2025-11-01T01:57:35.545140351Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:35.546224 env[1679]: time="2025-11-01T01:57:35.546153452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 01:57:35.548998 env[1679]: time="2025-11-01T01:57:35.548957549Z" level=info msg="CreateContainer within sandbox \"abf0c315b02f10ff3169a92fd73e59489fc6c662a9d4c79fbfa01a75539c572d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 01:57:35.558044 kubelet[2679]: I1101 01:57:35.558015 2679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:57:35.558520 env[1679]: time="2025-11-01T01:57:35.558388041Z" level=info msg="CreateContainer within sandbox \"abf0c315b02f10ff3169a92fd73e59489fc6c662a9d4c79fbfa01a75539c572d\" for 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"94acc39fea454fbee58077d9f82eba462bea565c510a72a18ab434053154c35e\"" Nov 1 01:57:35.558953 env[1679]: time="2025-11-01T01:57:35.558906542Z" level=info msg="StartContainer for \"94acc39fea454fbee58077d9f82eba462bea565c510a72a18ab434053154c35e\"" Nov 1 01:57:35.580537 kubelet[2679]: E1101 01:57:35.580498 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.580537 kubelet[2679]: W1101 01:57:35.580534 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.580781 kubelet[2679]: E1101 01:57:35.580571 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.580949 kubelet[2679]: E1101 01:57:35.580921 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.580949 kubelet[2679]: W1101 01:57:35.580941 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.581163 kubelet[2679]: E1101 01:57:35.580960 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.581270 kubelet[2679]: E1101 01:57:35.581254 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.581356 kubelet[2679]: W1101 01:57:35.581270 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.581356 kubelet[2679]: E1101 01:57:35.581286 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.581621 kubelet[2679]: E1101 01:57:35.581600 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.581691 kubelet[2679]: W1101 01:57:35.581622 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.581691 kubelet[2679]: E1101 01:57:35.581641 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.581992 kubelet[2679]: E1101 01:57:35.581947 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.581992 kubelet[2679]: W1101 01:57:35.581968 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.581992 kubelet[2679]: E1101 01:57:35.581987 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.582260 kubelet[2679]: E1101 01:57:35.582246 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.582324 kubelet[2679]: W1101 01:57:35.582261 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.582324 kubelet[2679]: E1101 01:57:35.582276 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.582562 kubelet[2679]: E1101 01:57:35.582525 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.582562 kubelet[2679]: W1101 01:57:35.582539 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.582562 kubelet[2679]: E1101 01:57:35.582553 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.582802 kubelet[2679]: E1101 01:57:35.582789 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.582866 kubelet[2679]: W1101 01:57:35.582808 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.582866 kubelet[2679]: E1101 01:57:35.582823 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.583062 kubelet[2679]: E1101 01:57:35.583049 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.583121 kubelet[2679]: W1101 01:57:35.583062 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.583121 kubelet[2679]: E1101 01:57:35.583076 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.583284 kubelet[2679]: E1101 01:57:35.583270 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.583284 kubelet[2679]: W1101 01:57:35.583284 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.583428 kubelet[2679]: E1101 01:57:35.583297 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.583551 kubelet[2679]: E1101 01:57:35.583538 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.583611 kubelet[2679]: W1101 01:57:35.583551 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.583611 kubelet[2679]: E1101 01:57:35.583563 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.583795 kubelet[2679]: E1101 01:57:35.583782 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.583857 kubelet[2679]: W1101 01:57:35.583795 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.583857 kubelet[2679]: E1101 01:57:35.583808 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.584038 kubelet[2679]: E1101 01:57:35.584024 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.584097 kubelet[2679]: W1101 01:57:35.584038 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.584097 kubelet[2679]: E1101 01:57:35.584050 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.584260 kubelet[2679]: E1101 01:57:35.584246 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.584322 kubelet[2679]: W1101 01:57:35.584261 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.584322 kubelet[2679]: E1101 01:57:35.584273 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.584497 kubelet[2679]: E1101 01:57:35.584483 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.584558 kubelet[2679]: W1101 01:57:35.584497 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.584558 kubelet[2679]: E1101 01:57:35.584510 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.604001 kubelet[2679]: E1101 01:57:35.603937 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.604001 kubelet[2679]: W1101 01:57:35.603962 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.604001 kubelet[2679]: E1101 01:57:35.603984 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.604357 kubelet[2679]: E1101 01:57:35.604321 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.604435 kubelet[2679]: W1101 01:57:35.604356 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.604435 kubelet[2679]: E1101 01:57:35.604388 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.604759 kubelet[2679]: E1101 01:57:35.604709 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.604759 kubelet[2679]: W1101 01:57:35.604725 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.604759 kubelet[2679]: E1101 01:57:35.604744 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.605090 kubelet[2679]: E1101 01:57:35.605070 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.605179 kubelet[2679]: W1101 01:57:35.605091 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.605179 kubelet[2679]: E1101 01:57:35.605116 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.605422 kubelet[2679]: E1101 01:57:35.605374 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.605422 kubelet[2679]: W1101 01:57:35.605388 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.605422 kubelet[2679]: E1101 01:57:35.605404 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.605735 kubelet[2679]: E1101 01:57:35.605695 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.605735 kubelet[2679]: W1101 01:57:35.605718 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.605895 kubelet[2679]: E1101 01:57:35.605743 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.606135 kubelet[2679]: E1101 01:57:35.606114 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.606196 kubelet[2679]: W1101 01:57:35.606135 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.606196 kubelet[2679]: E1101 01:57:35.606176 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.606404 kubelet[2679]: E1101 01:57:35.606387 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.606404 kubelet[2679]: W1101 01:57:35.606402 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.606594 kubelet[2679]: E1101 01:57:35.606443 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.606700 kubelet[2679]: E1101 01:57:35.606609 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.606700 kubelet[2679]: W1101 01:57:35.606622 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.606700 kubelet[2679]: E1101 01:57:35.606641 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.606953 kubelet[2679]: E1101 01:57:35.606899 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.606953 kubelet[2679]: W1101 01:57:35.606912 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.606953 kubelet[2679]: E1101 01:57:35.606929 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.607219 kubelet[2679]: E1101 01:57:35.607129 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.607219 kubelet[2679]: W1101 01:57:35.607141 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.607219 kubelet[2679]: E1101 01:57:35.607157 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.607425 kubelet[2679]: E1101 01:57:35.607416 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.607492 kubelet[2679]: W1101 01:57:35.607428 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.607492 kubelet[2679]: E1101 01:57:35.607446 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.607752 kubelet[2679]: E1101 01:57:35.607728 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.607752 kubelet[2679]: W1101 01:57:35.607748 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.607958 kubelet[2679]: E1101 01:57:35.607777 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.608072 kubelet[2679]: E1101 01:57:35.608049 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.608174 kubelet[2679]: W1101 01:57:35.608073 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.608174 kubelet[2679]: E1101 01:57:35.608095 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.608358 kubelet[2679]: E1101 01:57:35.608315 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.608358 kubelet[2679]: W1101 01:57:35.608340 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.608565 kubelet[2679]: E1101 01:57:35.608360 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.608676 kubelet[2679]: E1101 01:57:35.608573 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.608676 kubelet[2679]: W1101 01:57:35.608585 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.608676 kubelet[2679]: E1101 01:57:35.608599 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.608946 kubelet[2679]: E1101 01:57:35.608818 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.608946 kubelet[2679]: W1101 01:57:35.608830 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.608946 kubelet[2679]: E1101 01:57:35.608844 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:57:35.609286 kubelet[2679]: E1101 01:57:35.609267 2679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:57:35.609286 kubelet[2679]: W1101 01:57:35.609281 2679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:57:35.609494 kubelet[2679]: E1101 01:57:35.609295 2679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:57:35.620741 env[1679]: time="2025-11-01T01:57:35.620683453Z" level=info msg="StartContainer for \"94acc39fea454fbee58077d9f82eba462bea565c510a72a18ab434053154c35e\" returns successfully" Nov 1 01:57:35.941179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94acc39fea454fbee58077d9f82eba462bea565c510a72a18ab434053154c35e-rootfs.mount: Deactivated successfully. 
Nov 1 01:57:36.512352 kubelet[2679]: E1101 01:57:36.512218 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:57:36.672021 env[1679]: time="2025-11-01T01:57:36.671868075Z" level=info msg="shim disconnected" id=94acc39fea454fbee58077d9f82eba462bea565c510a72a18ab434053154c35e Nov 1 01:57:36.672021 env[1679]: time="2025-11-01T01:57:36.671975706Z" level=warning msg="cleaning up after shim disconnected" id=94acc39fea454fbee58077d9f82eba462bea565c510a72a18ab434053154c35e namespace=k8s.io Nov 1 01:57:36.672021 env[1679]: time="2025-11-01T01:57:36.672007653Z" level=info msg="cleaning up dead shim" Nov 1 01:57:36.689556 env[1679]: time="2025-11-01T01:57:36.689418847Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:57:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3549 runtime=io.containerd.runc.v2\n" Nov 1 01:57:37.571842 env[1679]: time="2025-11-01T01:57:37.571748833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 01:57:38.511925 kubelet[2679]: E1101 01:57:38.511799 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:57:40.512396 kubelet[2679]: E1101 01:57:40.512236 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4r6nm" 
podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:57:41.522063 env[1679]: time="2025-11-01T01:57:41.522009945Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:41.522579 env[1679]: time="2025-11-01T01:57:41.522540039Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:41.523147 env[1679]: time="2025-11-01T01:57:41.523104565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:41.523871 env[1679]: time="2025-11-01T01:57:41.523830922Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:41.524475 env[1679]: time="2025-11-01T01:57:41.524434075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 01:57:41.525787 env[1679]: time="2025-11-01T01:57:41.525773264Z" level=info msg="CreateContainer within sandbox \"abf0c315b02f10ff3169a92fd73e59489fc6c662a9d4c79fbfa01a75539c572d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 01:57:41.530599 env[1679]: time="2025-11-01T01:57:41.530582308Z" level=info msg="CreateContainer within sandbox \"abf0c315b02f10ff3169a92fd73e59489fc6c662a9d4c79fbfa01a75539c572d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"63307da7059aef6e64387f2100b069ae41d92f779816e8c61d5ba915de00d1f3\"" Nov 1 01:57:41.530993 env[1679]: time="2025-11-01T01:57:41.530931265Z" 
level=info msg="StartContainer for \"63307da7059aef6e64387f2100b069ae41d92f779816e8c61d5ba915de00d1f3\"" Nov 1 01:57:41.556052 env[1679]: time="2025-11-01T01:57:41.555980848Z" level=info msg="StartContainer for \"63307da7059aef6e64387f2100b069ae41d92f779816e8c61d5ba915de00d1f3\" returns successfully" Nov 1 01:57:42.390990 env[1679]: time="2025-11-01T01:57:42.390825254Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 01:57:42.436101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63307da7059aef6e64387f2100b069ae41d92f779816e8c61d5ba915de00d1f3-rootfs.mount: Deactivated successfully. Nov 1 01:57:42.489660 kubelet[2679]: I1101 01:57:42.489606 2679 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 01:57:42.518220 env[1679]: time="2025-11-01T01:57:42.518134030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4r6nm,Uid:66d2f097-1517-44b9-891a-35d40c5f36ae,Namespace:calico-system,Attempt:0,}" Nov 1 01:57:42.556598 kubelet[2679]: I1101 01:57:42.556544 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z78dk\" (UniqueName: \"kubernetes.io/projected/183529c2-fd5c-4a2e-b002-133e45559e04-kube-api-access-z78dk\") pod \"coredns-668d6bf9bc-spzcr\" (UID: \"183529c2-fd5c-4a2e-b002-133e45559e04\") " pod="kube-system/coredns-668d6bf9bc-spzcr" Nov 1 01:57:42.556598 kubelet[2679]: I1101 01:57:42.556578 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af91c5b4-018e-48fd-aa87-a2db911b8a67-config-volume\") pod \"coredns-668d6bf9bc-rfczw\" (UID: \"af91c5b4-018e-48fd-aa87-a2db911b8a67\") " 
pod="kube-system/coredns-668d6bf9bc-rfczw" Nov 1 01:57:42.556756 kubelet[2679]: I1101 01:57:42.556605 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2ll9\" (UniqueName: \"kubernetes.io/projected/af91c5b4-018e-48fd-aa87-a2db911b8a67-kube-api-access-q2ll9\") pod \"coredns-668d6bf9bc-rfczw\" (UID: \"af91c5b4-018e-48fd-aa87-a2db911b8a67\") " pod="kube-system/coredns-668d6bf9bc-rfczw" Nov 1 01:57:42.556756 kubelet[2679]: I1101 01:57:42.556669 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/183529c2-fd5c-4a2e-b002-133e45559e04-config-volume\") pod \"coredns-668d6bf9bc-spzcr\" (UID: \"183529c2-fd5c-4a2e-b002-133e45559e04\") " pod="kube-system/coredns-668d6bf9bc-spzcr" Nov 1 01:57:42.672081 kubelet[2679]: I1101 01:57:42.657673 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/66ab6902-4483-4337-8905-71710abec0d5-calico-apiserver-certs\") pod \"calico-apiserver-fbf49c57b-msb77\" (UID: \"66ab6902-4483-4337-8905-71710abec0d5\") " pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" Nov 1 01:57:42.672081 kubelet[2679]: I1101 01:57:42.657792 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk5mm\" (UniqueName: \"kubernetes.io/projected/792abee8-a81f-4cb1-9ede-47798a35f0b4-kube-api-access-rk5mm\") pod \"calico-kube-controllers-5ddd8b55c8-kbtkg\" (UID: \"792abee8-a81f-4cb1-9ede-47798a35f0b4\") " pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" Nov 1 01:57:42.672081 kubelet[2679]: I1101 01:57:42.657847 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f74bb146-b95d-4f2f-8d41-d7fea2ff0e93-whisker-ca-bundle\") pod \"whisker-5d9b556fcb-w5np7\" (UID: \"f74bb146-b95d-4f2f-8d41-d7fea2ff0e93\") " pod="calico-system/whisker-5d9b556fcb-w5np7" Nov 1 01:57:42.672081 kubelet[2679]: I1101 01:57:42.657902 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/381a5ea3-a9a9-42e2-8c3a-9c0b410afe13-calico-apiserver-certs\") pod \"calico-apiserver-fbf49c57b-d5g9p\" (UID: \"381a5ea3-a9a9-42e2-8c3a-9c0b410afe13\") " pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" Nov 1 01:57:42.672081 kubelet[2679]: I1101 01:57:42.658006 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f74bb146-b95d-4f2f-8d41-d7fea2ff0e93-whisker-backend-key-pair\") pod \"whisker-5d9b556fcb-w5np7\" (UID: \"f74bb146-b95d-4f2f-8d41-d7fea2ff0e93\") " pod="calico-system/whisker-5d9b556fcb-w5np7" Nov 1 01:57:42.673219 kubelet[2679]: I1101 01:57:42.658052 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shhx5\" (UniqueName: \"kubernetes.io/projected/381a5ea3-a9a9-42e2-8c3a-9c0b410afe13-kube-api-access-shhx5\") pod \"calico-apiserver-fbf49c57b-d5g9p\" (UID: \"381a5ea3-a9a9-42e2-8c3a-9c0b410afe13\") " pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" Nov 1 01:57:42.673219 kubelet[2679]: I1101 01:57:42.658157 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9792\" (UniqueName: \"kubernetes.io/projected/66ab6902-4483-4337-8905-71710abec0d5-kube-api-access-n9792\") pod \"calico-apiserver-fbf49c57b-msb77\" (UID: \"66ab6902-4483-4337-8905-71710abec0d5\") " pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" Nov 1 01:57:42.673219 kubelet[2679]: I1101 01:57:42.658202 2679 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bd4bd36-d549-4194-a331-51709a095bb2-config\") pod \"goldmane-666569f655-bm44l\" (UID: \"6bd4bd36-d549-4194-a331-51709a095bb2\") " pod="calico-system/goldmane-666569f655-bm44l" Nov 1 01:57:42.673219 kubelet[2679]: I1101 01:57:42.658248 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45r66\" (UniqueName: \"kubernetes.io/projected/6bd4bd36-d549-4194-a331-51709a095bb2-kube-api-access-45r66\") pod \"goldmane-666569f655-bm44l\" (UID: \"6bd4bd36-d549-4194-a331-51709a095bb2\") " pod="calico-system/goldmane-666569f655-bm44l" Nov 1 01:57:42.673219 kubelet[2679]: I1101 01:57:42.658301 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bd4bd36-d549-4194-a331-51709a095bb2-goldmane-ca-bundle\") pod \"goldmane-666569f655-bm44l\" (UID: \"6bd4bd36-d549-4194-a331-51709a095bb2\") " pod="calico-system/goldmane-666569f655-bm44l" Nov 1 01:57:42.674201 kubelet[2679]: I1101 01:57:42.658396 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/792abee8-a81f-4cb1-9ede-47798a35f0b4-tigera-ca-bundle\") pod \"calico-kube-controllers-5ddd8b55c8-kbtkg\" (UID: \"792abee8-a81f-4cb1-9ede-47798a35f0b4\") " pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" Nov 1 01:57:42.674201 kubelet[2679]: I1101 01:57:42.658651 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkdgf\" (UniqueName: \"kubernetes.io/projected/f74bb146-b95d-4f2f-8d41-d7fea2ff0e93-kube-api-access-jkdgf\") pod \"whisker-5d9b556fcb-w5np7\" (UID: \"f74bb146-b95d-4f2f-8d41-d7fea2ff0e93\") " pod="calico-system/whisker-5d9b556fcb-w5np7" 
Nov 1 01:57:42.674201 kubelet[2679]: I1101 01:57:42.658791 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6bd4bd36-d549-4194-a331-51709a095bb2-goldmane-key-pair\") pod \"goldmane-666569f655-bm44l\" (UID: \"6bd4bd36-d549-4194-a331-51709a095bb2\") " pod="calico-system/goldmane-666569f655-bm44l" Nov 1 01:57:42.751194 env[1679]: time="2025-11-01T01:57:42.751132058Z" level=error msg="Failed to destroy network for sandbox \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.751597 env[1679]: time="2025-11-01T01:57:42.751486570Z" level=error msg="encountered an error cleaning up failed sandbox \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.751597 env[1679]: time="2025-11-01T01:57:42.751534497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4r6nm,Uid:66d2f097-1517-44b9-891a-35d40c5f36ae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.751802 kubelet[2679]: E1101 01:57:42.751765 2679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.751862 kubelet[2679]: E1101 01:57:42.751833 2679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4r6nm" Nov 1 01:57:42.751946 kubelet[2679]: E1101 01:57:42.751859 2679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4r6nm" Nov 1 01:57:42.751946 kubelet[2679]: E1101 01:57:42.751902 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4r6nm" 
podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:57:42.754380 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695-shm.mount: Deactivated successfully. Nov 1 01:57:42.815556 env[1679]: time="2025-11-01T01:57:42.815483920Z" level=info msg="shim disconnected" id=63307da7059aef6e64387f2100b069ae41d92f779816e8c61d5ba915de00d1f3 Nov 1 01:57:42.815556 env[1679]: time="2025-11-01T01:57:42.815554944Z" level=warning msg="cleaning up after shim disconnected" id=63307da7059aef6e64387f2100b069ae41d92f779816e8c61d5ba915de00d1f3 namespace=k8s.io Nov 1 01:57:42.815914 env[1679]: time="2025-11-01T01:57:42.815573967Z" level=info msg="cleaning up dead shim" Nov 1 01:57:42.830154 env[1679]: time="2025-11-01T01:57:42.830068283Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:57:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3682 runtime=io.containerd.runc.v2\n" Nov 1 01:57:42.842710 env[1679]: time="2025-11-01T01:57:42.842633347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rfczw,Uid:af91c5b4-018e-48fd-aa87-a2db911b8a67,Namespace:kube-system,Attempt:0,}" Nov 1 01:57:42.845993 env[1679]: time="2025-11-01T01:57:42.845904008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-spzcr,Uid:183529c2-fd5c-4a2e-b002-133e45559e04,Namespace:kube-system,Attempt:0,}" Nov 1 01:57:42.848217 env[1679]: time="2025-11-01T01:57:42.848090212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d9b556fcb-w5np7,Uid:f74bb146-b95d-4f2f-8d41-d7fea2ff0e93,Namespace:calico-system,Attempt:0,}" Nov 1 01:57:42.848493 env[1679]: time="2025-11-01T01:57:42.848250119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ddd8b55c8-kbtkg,Uid:792abee8-a81f-4cb1-9ede-47798a35f0b4,Namespace:calico-system,Attempt:0,}" Nov 1 01:57:42.850215 env[1679]: time="2025-11-01T01:57:42.850084412Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbf49c57b-msb77,Uid:66ab6902-4483-4337-8905-71710abec0d5,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:57:42.851713 env[1679]: time="2025-11-01T01:57:42.851572168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbf49c57b-d5g9p,Uid:381a5ea3-a9a9-42e2-8c3a-9c0b410afe13,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:57:42.852775 env[1679]: time="2025-11-01T01:57:42.852631184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bm44l,Uid:6bd4bd36-d549-4194-a331-51709a095bb2,Namespace:calico-system,Attempt:0,}" Nov 1 01:57:42.934093 env[1679]: time="2025-11-01T01:57:42.933966372Z" level=error msg="Failed to destroy network for sandbox \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.934283 env[1679]: time="2025-11-01T01:57:42.934258437Z" level=error msg="encountered an error cleaning up failed sandbox \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.934371 env[1679]: time="2025-11-01T01:57:42.934297951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rfczw,Uid:af91c5b4-018e-48fd-aa87-a2db911b8a67,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 
01:57:42.934497 kubelet[2679]: E1101 01:57:42.934469 2679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.934574 kubelet[2679]: E1101 01:57:42.934520 2679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rfczw" Nov 1 01:57:42.934574 kubelet[2679]: E1101 01:57:42.934536 2679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rfczw" Nov 1 01:57:42.934661 kubelet[2679]: E1101 01:57:42.934568 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rfczw_kube-system(af91c5b4-018e-48fd-aa87-a2db911b8a67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rfczw_kube-system(af91c5b4-018e-48fd-aa87-a2db911b8a67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rfczw" podUID="af91c5b4-018e-48fd-aa87-a2db911b8a67" Nov 1 01:57:42.936585 env[1679]: time="2025-11-01T01:57:42.936514356Z" level=error msg="Failed to destroy network for sandbox \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.936846 env[1679]: time="2025-11-01T01:57:42.936798750Z" level=error msg="encountered an error cleaning up failed sandbox \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.936883 env[1679]: time="2025-11-01T01:57:42.936839734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d9b556fcb-w5np7,Uid:f74bb146-b95d-4f2f-8d41-d7fea2ff0e93,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.936967 env[1679]: time="2025-11-01T01:57:42.936946369Z" level=error msg="Failed to destroy network for sandbox \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.937052 
kubelet[2679]: E1101 01:57:42.937023 2679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.937106 kubelet[2679]: E1101 01:57:42.937076 2679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d9b556fcb-w5np7" Nov 1 01:57:42.937106 kubelet[2679]: E1101 01:57:42.937097 2679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d9b556fcb-w5np7" Nov 1 01:57:42.937182 kubelet[2679]: E1101 01:57:42.937138 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d9b556fcb-w5np7_calico-system(f74bb146-b95d-4f2f-8d41-d7fea2ff0e93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d9b556fcb-w5np7_calico-system(f74bb146-b95d-4f2f-8d41-d7fea2ff0e93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d9b556fcb-w5np7" podUID="f74bb146-b95d-4f2f-8d41-d7fea2ff0e93" Nov 1 01:57:42.937319 env[1679]: time="2025-11-01T01:57:42.937301789Z" level=error msg="encountered an error cleaning up failed sandbox \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.937371 env[1679]: time="2025-11-01T01:57:42.937332174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-spzcr,Uid:183529c2-fd5c-4a2e-b002-133e45559e04,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.937451 kubelet[2679]: E1101 01:57:42.937411 2679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.937451 kubelet[2679]: E1101 01:57:42.937436 2679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-spzcr" Nov 1 01:57:42.937451 kubelet[2679]: E1101 01:57:42.937447 2679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-spzcr" Nov 1 01:57:42.937522 kubelet[2679]: E1101 01:57:42.937474 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-spzcr_kube-system(183529c2-fd5c-4a2e-b002-133e45559e04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-spzcr_kube-system(183529c2-fd5c-4a2e-b002-133e45559e04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-spzcr" podUID="183529c2-fd5c-4a2e-b002-133e45559e04" Nov 1 01:57:42.938086 env[1679]: time="2025-11-01T01:57:42.938057853Z" level=error msg="Failed to destroy network for sandbox \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.938323 env[1679]: time="2025-11-01T01:57:42.938295702Z" level=error msg="Failed to destroy network for sandbox 
\"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.938405 env[1679]: time="2025-11-01T01:57:42.938389730Z" level=error msg="encountered an error cleaning up failed sandbox \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.938432 env[1679]: time="2025-11-01T01:57:42.938415147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbf49c57b-d5g9p,Uid:381a5ea3-a9a9-42e2-8c3a-9c0b410afe13,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.938500 env[1679]: time="2025-11-01T01:57:42.938484407Z" level=error msg="encountered an error cleaning up failed sandbox \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.938527 env[1679]: time="2025-11-01T01:57:42.938508633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ddd8b55c8-kbtkg,Uid:792abee8-a81f-4cb1-9ede-47798a35f0b4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.938564 kubelet[2679]: E1101 01:57:42.938503 2679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.938564 kubelet[2679]: E1101 01:57:42.938526 2679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" Nov 1 01:57:42.938564 kubelet[2679]: E1101 01:57:42.938539 2679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" Nov 1 01:57:42.938634 kubelet[2679]: E1101 01:57:42.938563 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fbf49c57b-d5g9p_calico-apiserver(381a5ea3-a9a9-42e2-8c3a-9c0b410afe13)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"calico-apiserver-fbf49c57b-d5g9p_calico-apiserver(381a5ea3-a9a9-42e2-8c3a-9c0b410afe13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:57:42.938634 kubelet[2679]: E1101 01:57:42.938579 2679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.938634 kubelet[2679]: E1101 01:57:42.938604 2679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" Nov 1 01:57:42.938715 kubelet[2679]: E1101 01:57:42.938621 2679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" Nov 1 
01:57:42.938715 kubelet[2679]: E1101 01:57:42.938645 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5ddd8b55c8-kbtkg_calico-system(792abee8-a81f-4cb1-9ede-47798a35f0b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5ddd8b55c8-kbtkg_calico-system(792abee8-a81f-4cb1-9ede-47798a35f0b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 01:57:42.939592 env[1679]: time="2025-11-01T01:57:42.939574199Z" level=error msg="Failed to destroy network for sandbox \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.939755 env[1679]: time="2025-11-01T01:57:42.939716277Z" level=error msg="encountered an error cleaning up failed sandbox \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.939755 env[1679]: time="2025-11-01T01:57:42.939740014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbf49c57b-msb77,Uid:66ab6902-4483-4337-8905-71710abec0d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.939822 kubelet[2679]: E1101 01:57:42.939801 2679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.939847 kubelet[2679]: E1101 01:57:42.939821 2679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" Nov 1 01:57:42.939847 kubelet[2679]: E1101 01:57:42.939835 2679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" Nov 1 01:57:42.939889 kubelet[2679]: E1101 01:57:42.939852 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fbf49c57b-msb77_calico-apiserver(66ab6902-4483-4337-8905-71710abec0d5)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"calico-apiserver-fbf49c57b-msb77_calico-apiserver(66ab6902-4483-4337-8905-71710abec0d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 01:57:42.940527 env[1679]: time="2025-11-01T01:57:42.940477259Z" level=error msg="Failed to destroy network for sandbox \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.940683 env[1679]: time="2025-11-01T01:57:42.940636630Z" level=error msg="encountered an error cleaning up failed sandbox \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.940683 env[1679]: time="2025-11-01T01:57:42.940658641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bm44l,Uid:6bd4bd36-d549-4194-a331-51709a095bb2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.940759 kubelet[2679]: E1101 01:57:42.940728 2679 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:42.940759 kubelet[2679]: E1101 01:57:42.940748 2679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bm44l" Nov 1 01:57:42.940809 kubelet[2679]: E1101 01:57:42.940762 2679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bm44l" Nov 1 01:57:42.940809 kubelet[2679]: E1101 01:57:42.940781 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-bm44l_calico-system(6bd4bd36-d549-4194-a331-51709a095bb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-bm44l_calico-system(6bd4bd36-d549-4194-a331-51709a095bb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 01:57:43.586883 kubelet[2679]: I1101 01:57:43.586808 2679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:57:43.588281 env[1679]: time="2025-11-01T01:57:43.588196503Z" level=info msg="StopPodSandbox for \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\"" Nov 1 01:57:43.589458 kubelet[2679]: I1101 01:57:43.589384 2679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:57:43.590658 env[1679]: time="2025-11-01T01:57:43.590582910Z" level=info msg="StopPodSandbox for \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\"" Nov 1 01:57:43.591610 kubelet[2679]: I1101 01:57:43.591545 2679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:57:43.592864 env[1679]: time="2025-11-01T01:57:43.592776908Z" level=info msg="StopPodSandbox for \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\"" Nov 1 01:57:43.593771 kubelet[2679]: I1101 01:57:43.593727 2679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:57:43.594948 env[1679]: time="2025-11-01T01:57:43.594847160Z" level=info msg="StopPodSandbox for \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\"" Nov 1 01:57:43.597105 kubelet[2679]: I1101 01:57:43.597085 2679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:57:43.597600 env[1679]: time="2025-11-01T01:57:43.597534194Z" level=info msg="StopPodSandbox 
for \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\"" Nov 1 01:57:43.598302 kubelet[2679]: I1101 01:57:43.598282 2679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:57:43.598709 env[1679]: time="2025-11-01T01:57:43.598689093Z" level=info msg="StopPodSandbox for \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\"" Nov 1 01:57:43.599991 kubelet[2679]: I1101 01:57:43.599965 2679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:57:43.600141 env[1679]: time="2025-11-01T01:57:43.600054334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 01:57:43.600385 env[1679]: time="2025-11-01T01:57:43.600355693Z" level=info msg="StopPodSandbox for \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\"" Nov 1 01:57:43.600532 kubelet[2679]: I1101 01:57:43.600518 2679 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:57:43.600843 env[1679]: time="2025-11-01T01:57:43.600822085Z" level=info msg="StopPodSandbox for \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\"" Nov 1 01:57:43.612091 env[1679]: time="2025-11-01T01:57:43.612036259Z" level=error msg="StopPodSandbox for \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\" failed" error="failed to destroy network for sandbox \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:43.612239 kubelet[2679]: E1101 01:57:43.612214 2679 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to destroy network for sandbox \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:57:43.612301 kubelet[2679]: E1101 01:57:43.612267 2679 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71"} Nov 1 01:57:43.612339 kubelet[2679]: E1101 01:57:43.612321 2679 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af91c5b4-018e-48fd-aa87-a2db911b8a67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:57:43.612402 kubelet[2679]: E1101 01:57:43.612349 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af91c5b4-018e-48fd-aa87-a2db911b8a67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rfczw" podUID="af91c5b4-018e-48fd-aa87-a2db911b8a67" Nov 1 01:57:43.612451 env[1679]: time="2025-11-01T01:57:43.612415689Z" level=error msg="StopPodSandbox for \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\" 
failed" error="failed to destroy network for sandbox \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:43.612543 kubelet[2679]: E1101 01:57:43.612522 2679 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:57:43.612597 kubelet[2679]: E1101 01:57:43.612550 2679 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76"} Nov 1 01:57:43.612597 kubelet[2679]: E1101 01:57:43.612577 2679 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"381a5ea3-a9a9-42e2-8c3a-9c0b410afe13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:57:43.612705 kubelet[2679]: E1101 01:57:43.612598 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"381a5ea3-a9a9-42e2-8c3a-9c0b410afe13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:57:43.613070 env[1679]: time="2025-11-01T01:57:43.613011949Z" level=error msg="StopPodSandbox for \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\" failed" error="failed to destroy network for sandbox \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:43.613163 kubelet[2679]: E1101 01:57:43.613144 2679 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:57:43.613204 kubelet[2679]: E1101 01:57:43.613169 2679 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0"} Nov 1 01:57:43.613204 kubelet[2679]: E1101 01:57:43.613195 2679 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6bd4bd36-d549-4194-a331-51709a095bb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:57:43.613290 kubelet[2679]: E1101 01:57:43.613213 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6bd4bd36-d549-4194-a331-51709a095bb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 01:57:43.613424 env[1679]: time="2025-11-01T01:57:43.613403246Z" level=error msg="StopPodSandbox for \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\" failed" error="failed to destroy network for sandbox \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:43.613496 kubelet[2679]: E1101 01:57:43.613480 2679 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:57:43.613533 kubelet[2679]: E1101 01:57:43.613512 2679 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695"} Nov 1 01:57:43.613557 kubelet[2679]: E1101 
01:57:43.613534 2679 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66d2f097-1517-44b9-891a-35d40c5f36ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:57:43.613595 kubelet[2679]: E1101 01:57:43.613552 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66d2f097-1517-44b9-891a-35d40c5f36ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:57:43.614463 env[1679]: time="2025-11-01T01:57:43.614438011Z" level=error msg="StopPodSandbox for \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\" failed" error="failed to destroy network for sandbox \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:43.614559 kubelet[2679]: E1101 01:57:43.614544 2679 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:57:43.614605 kubelet[2679]: E1101 01:57:43.614565 2679 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04"} Nov 1 01:57:43.614605 kubelet[2679]: E1101 01:57:43.614584 2679 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f74bb146-b95d-4f2f-8d41-d7fea2ff0e93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:57:43.614605 kubelet[2679]: E1101 01:57:43.614600 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f74bb146-b95d-4f2f-8d41-d7fea2ff0e93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d9b556fcb-w5np7" podUID="f74bb146-b95d-4f2f-8d41-d7fea2ff0e93" Nov 1 01:57:43.615053 env[1679]: time="2025-11-01T01:57:43.615031730Z" level=error msg="StopPodSandbox for \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\" failed" error="failed to destroy network for sandbox \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:43.615111 kubelet[2679]: E1101 01:57:43.615098 2679 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:57:43.615152 kubelet[2679]: E1101 01:57:43.615115 2679 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e"} Nov 1 01:57:43.615152 kubelet[2679]: E1101 01:57:43.615130 2679 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"183529c2-fd5c-4a2e-b002-133e45559e04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:57:43.615152 kubelet[2679]: E1101 01:57:43.615142 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"183529c2-fd5c-4a2e-b002-133e45559e04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-spzcr" podUID="183529c2-fd5c-4a2e-b002-133e45559e04" Nov 1 01:57:43.616152 env[1679]: time="2025-11-01T01:57:43.616132521Z" level=error msg="StopPodSandbox for \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\" failed" error="failed to destroy network for sandbox \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:43.616213 kubelet[2679]: E1101 01:57:43.616201 2679 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:57:43.616246 kubelet[2679]: E1101 01:57:43.616218 2679 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e"} Nov 1 01:57:43.616246 kubelet[2679]: E1101 01:57:43.616235 2679 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"792abee8-a81f-4cb1-9ede-47798a35f0b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:57:43.616304 kubelet[2679]: E1101 01:57:43.616246 2679 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"792abee8-a81f-4cb1-9ede-47798a35f0b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 01:57:43.616347 env[1679]: time="2025-11-01T01:57:43.616265385Z" level=error msg="StopPodSandbox for \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\" failed" error="failed to destroy network for sandbox \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:57:43.616373 kubelet[2679]: E1101 01:57:43.616328 2679 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:57:43.616373 kubelet[2679]: E1101 01:57:43.616341 2679 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a"} Nov 1 01:57:43.616373 kubelet[2679]: E1101 01:57:43.616352 2679 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66ab6902-4483-4337-8905-71710abec0d5\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:57:43.616373 kubelet[2679]: E1101 01:57:43.616362 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66ab6902-4483-4337-8905-71710abec0d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 01:57:47.790827 kubelet[2679]: I1101 01:57:47.790809 2679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:57:47.804000 audit[4161]: NETFILTER_CFG table=filter:101 family=2 entries=21 op=nft_register_rule pid=4161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:47.804000 audit[4161]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc396de500 a2=0 a3=7ffc396de4ec items=0 ppid=2820 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:47.866384 kernel: audit: type=1325 audit(1761962267.804:277): table=filter:101 family=2 entries=21 op=nft_register_rule pid=4161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:47.866429 kernel: audit: type=1300 audit(1761962267.804:277): arch=c000003e syscall=46 success=yes 
exit=7480 a0=3 a1=7ffc396de500 a2=0 a3=7ffc396de4ec items=0 ppid=2820 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:47.804000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:48.023392 kernel: audit: type=1327 audit(1761962267.804:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:48.025000 audit[4161]: NETFILTER_CFG table=nat:102 family=2 entries=19 op=nft_register_chain pid=4161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:48.025000 audit[4161]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc396de500 a2=0 a3=7ffc396de4ec items=0 ppid=2820 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:48.186056 kernel: audit: type=1325 audit(1761962268.025:278): table=nat:102 family=2 entries=19 op=nft_register_chain pid=4161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:48.186121 kernel: audit: type=1300 audit(1761962268.025:278): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc396de500 a2=0 a3=7ffc396de4ec items=0 ppid=2820 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:48.186137 kernel: audit: type=1327 audit(1761962268.025:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:48.025000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:50.164168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123851730.mount: Deactivated successfully. Nov 1 01:57:50.180293 env[1679]: time="2025-11-01T01:57:50.180273181Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:50.180824 env[1679]: time="2025-11-01T01:57:50.180779620Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:50.181453 env[1679]: time="2025-11-01T01:57:50.181398361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:50.182850 env[1679]: time="2025-11-01T01:57:50.182710855Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:57:50.183123 env[1679]: time="2025-11-01T01:57:50.183109830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 01:57:50.187158 env[1679]: time="2025-11-01T01:57:50.187133586Z" level=info msg="CreateContainer within sandbox \"abf0c315b02f10ff3169a92fd73e59489fc6c662a9d4c79fbfa01a75539c572d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 01:57:50.192318 env[1679]: time="2025-11-01T01:57:50.192298792Z" level=info msg="CreateContainer within sandbox \"abf0c315b02f10ff3169a92fd73e59489fc6c662a9d4c79fbfa01a75539c572d\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e3e35db3c6f0187f634a7f7a8f947d2c22c9b434e2ef20a6db6d5225e7411cbc\"" Nov 1 01:57:50.192674 env[1679]: time="2025-11-01T01:57:50.192657542Z" level=info msg="StartContainer for \"e3e35db3c6f0187f634a7f7a8f947d2c22c9b434e2ef20a6db6d5225e7411cbc\"" Nov 1 01:57:50.217101 env[1679]: time="2025-11-01T01:57:50.217076241Z" level=info msg="StartContainer for \"e3e35db3c6f0187f634a7f7a8f947d2c22c9b434e2ef20a6db6d5225e7411cbc\" returns successfully" Nov 1 01:57:50.329565 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 01:57:50.329633 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 01:57:50.385411 env[1679]: time="2025-11-01T01:57:50.385375667Z" level=info msg="StopPodSandbox for \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\"" Nov 1 01:57:50.480960 env[1679]: 2025-11-01 01:57:50.418 [INFO][4239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:57:50.480960 env[1679]: 2025-11-01 01:57:50.419 [INFO][4239] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" iface="eth0" netns="/var/run/netns/cni-eda97c7f-5ed1-5d96-1ed8-12fe4ae343bf" Nov 1 01:57:50.480960 env[1679]: 2025-11-01 01:57:50.419 [INFO][4239] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" iface="eth0" netns="/var/run/netns/cni-eda97c7f-5ed1-5d96-1ed8-12fe4ae343bf" Nov 1 01:57:50.480960 env[1679]: 2025-11-01 01:57:50.420 [INFO][4239] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" iface="eth0" netns="/var/run/netns/cni-eda97c7f-5ed1-5d96-1ed8-12fe4ae343bf" Nov 1 01:57:50.480960 env[1679]: 2025-11-01 01:57:50.420 [INFO][4239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:57:50.480960 env[1679]: 2025-11-01 01:57:50.420 [INFO][4239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:57:50.480960 env[1679]: 2025-11-01 01:57:50.463 [INFO][4262] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" HandleID="k8s-pod-network.b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Workload="ci--3510.3.8--n--0f05b56927-k8s-whisker--5d9b556fcb--w5np7-eth0" Nov 1 01:57:50.480960 env[1679]: 2025-11-01 01:57:50.464 [INFO][4262] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:57:50.480960 env[1679]: 2025-11-01 01:57:50.464 [INFO][4262] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:50.480960 env[1679]: 2025-11-01 01:57:50.473 [WARNING][4262] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" HandleID="k8s-pod-network.b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Workload="ci--3510.3.8--n--0f05b56927-k8s-whisker--5d9b556fcb--w5np7-eth0" Nov 1 01:57:50.480960 env[1679]: 2025-11-01 01:57:50.473 [INFO][4262] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" HandleID="k8s-pod-network.b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Workload="ci--3510.3.8--n--0f05b56927-k8s-whisker--5d9b556fcb--w5np7-eth0" Nov 1 01:57:50.480960 env[1679]: 2025-11-01 01:57:50.474 [INFO][4262] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:50.480960 env[1679]: 2025-11-01 01:57:50.478 [INFO][4239] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:57:50.481862 env[1679]: time="2025-11-01T01:57:50.481071628Z" level=info msg="TearDown network for sandbox \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\" successfully" Nov 1 01:57:50.481862 env[1679]: time="2025-11-01T01:57:50.481125654Z" level=info msg="StopPodSandbox for \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\" returns successfully" Nov 1 01:57:50.516434 kubelet[2679]: I1101 01:57:50.515028 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f74bb146-b95d-4f2f-8d41-d7fea2ff0e93-whisker-backend-key-pair\") pod \"f74bb146-b95d-4f2f-8d41-d7fea2ff0e93\" (UID: \"f74bb146-b95d-4f2f-8d41-d7fea2ff0e93\") " Nov 1 01:57:50.516434 kubelet[2679]: I1101 01:57:50.515188 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f74bb146-b95d-4f2f-8d41-d7fea2ff0e93-whisker-ca-bundle\") pod 
\"f74bb146-b95d-4f2f-8d41-d7fea2ff0e93\" (UID: \"f74bb146-b95d-4f2f-8d41-d7fea2ff0e93\") " Nov 1 01:57:50.516434 kubelet[2679]: I1101 01:57:50.515359 2679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkdgf\" (UniqueName: \"kubernetes.io/projected/f74bb146-b95d-4f2f-8d41-d7fea2ff0e93-kube-api-access-jkdgf\") pod \"f74bb146-b95d-4f2f-8d41-d7fea2ff0e93\" (UID: \"f74bb146-b95d-4f2f-8d41-d7fea2ff0e93\") " Nov 1 01:57:50.516434 kubelet[2679]: I1101 01:57:50.516308 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f74bb146-b95d-4f2f-8d41-d7fea2ff0e93-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f74bb146-b95d-4f2f-8d41-d7fea2ff0e93" (UID: "f74bb146-b95d-4f2f-8d41-d7fea2ff0e93"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 01:57:50.522455 kubelet[2679]: I1101 01:57:50.522359 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f74bb146-b95d-4f2f-8d41-d7fea2ff0e93-kube-api-access-jkdgf" (OuterVolumeSpecName: "kube-api-access-jkdgf") pod "f74bb146-b95d-4f2f-8d41-d7fea2ff0e93" (UID: "f74bb146-b95d-4f2f-8d41-d7fea2ff0e93"). InnerVolumeSpecName "kube-api-access-jkdgf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:57:50.522455 kubelet[2679]: I1101 01:57:50.522405 2679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f74bb146-b95d-4f2f-8d41-d7fea2ff0e93-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f74bb146-b95d-4f2f-8d41-d7fea2ff0e93" (UID: "f74bb146-b95d-4f2f-8d41-d7fea2ff0e93"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 01:57:50.616902 kubelet[2679]: I1101 01:57:50.616792 2679 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f74bb146-b95d-4f2f-8d41-d7fea2ff0e93-whisker-backend-key-pair\") on node \"ci-3510.3.8-n-0f05b56927\" DevicePath \"\"" Nov 1 01:57:50.616902 kubelet[2679]: I1101 01:57:50.616861 2679 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f74bb146-b95d-4f2f-8d41-d7fea2ff0e93-whisker-ca-bundle\") on node \"ci-3510.3.8-n-0f05b56927\" DevicePath \"\"" Nov 1 01:57:50.616902 kubelet[2679]: I1101 01:57:50.616890 2679 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jkdgf\" (UniqueName: \"kubernetes.io/projected/f74bb146-b95d-4f2f-8d41-d7fea2ff0e93-kube-api-access-jkdgf\") on node \"ci-3510.3.8-n-0f05b56927\" DevicePath \"\"" Nov 1 01:57:50.662299 kubelet[2679]: I1101 01:57:50.662168 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lxlbg" podStartSLOduration=1.481395617 podStartE2EDuration="20.662138354s" podCreationTimestamp="2025-11-01 01:57:30 +0000 UTC" firstStartedPulling="2025-11-01 01:57:31.002819746 +0000 UTC m=+16.630456082" lastFinishedPulling="2025-11-01 01:57:50.183562508 +0000 UTC m=+35.811198819" observedRunningTime="2025-11-01 01:57:50.661888794 +0000 UTC m=+36.289525161" watchObservedRunningTime="2025-11-01 01:57:50.662138354 +0000 UTC m=+36.289774707" Nov 1 01:57:50.818937 kubelet[2679]: I1101 01:57:50.818810 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aabe0a9d-10db-49d2-a1d8-2a8011591b5d-whisker-ca-bundle\") pod \"whisker-5b79df9786-ds9vj\" (UID: \"aabe0a9d-10db-49d2-a1d8-2a8011591b5d\") " pod="calico-system/whisker-5b79df9786-ds9vj" Nov 1 01:57:50.819252 kubelet[2679]: 
I1101 01:57:50.819010 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgwpl\" (UniqueName: \"kubernetes.io/projected/aabe0a9d-10db-49d2-a1d8-2a8011591b5d-kube-api-access-pgwpl\") pod \"whisker-5b79df9786-ds9vj\" (UID: \"aabe0a9d-10db-49d2-a1d8-2a8011591b5d\") " pod="calico-system/whisker-5b79df9786-ds9vj" Nov 1 01:57:50.819252 kubelet[2679]: I1101 01:57:50.819109 2679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aabe0a9d-10db-49d2-a1d8-2a8011591b5d-whisker-backend-key-pair\") pod \"whisker-5b79df9786-ds9vj\" (UID: \"aabe0a9d-10db-49d2-a1d8-2a8011591b5d\") " pod="calico-system/whisker-5b79df9786-ds9vj" Nov 1 01:57:51.028520 env[1679]: time="2025-11-01T01:57:51.028379023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b79df9786-ds9vj,Uid:aabe0a9d-10db-49d2-a1d8-2a8011591b5d,Namespace:calico-system,Attempt:0,}" Nov 1 01:57:51.108159 systemd-networkd[1345]: califd1695f732b: Link UP Nov 1 01:57:51.164714 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:57:51.164756 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califd1695f732b: link becomes ready Nov 1 01:57:51.164786 systemd-networkd[1345]: califd1695f732b: Gained carrier Nov 1 01:57:51.166429 systemd[1]: run-netns-cni\x2deda97c7f\x2d5ed1\x2d5d96\x2d1ed8\x2d12fe4ae343bf.mount: Deactivated successfully. Nov 1 01:57:51.166503 systemd[1]: var-lib-kubelet-pods-f74bb146\x2db95d\x2d4f2f\x2d8d41\x2dd7fea2ff0e93-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djkdgf.mount: Deactivated successfully. Nov 1 01:57:51.166559 systemd[1]: var-lib-kubelet-pods-f74bb146\x2db95d\x2d4f2f\x2d8d41\x2dd7fea2ff0e93-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.056 [INFO][4293] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.063 [INFO][4293] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-eth0 whisker-5b79df9786- calico-system aabe0a9d-10db-49d2-a1d8-2a8011591b5d 872 0 2025-11-01 01:57:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b79df9786 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-n-0f05b56927 whisker-5b79df9786-ds9vj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califd1695f732b [] [] }} ContainerID="1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" Namespace="calico-system" Pod="whisker-5b79df9786-ds9vj" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-" Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.063 [INFO][4293] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" Namespace="calico-system" Pod="whisker-5b79df9786-ds9vj" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-eth0" Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.076 [INFO][4312] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" HandleID="k8s-pod-network.1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" Workload="ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-eth0" Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.076 [INFO][4312] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" 
HandleID="k8s-pod-network.1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" Workload="ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-0f05b56927", "pod":"whisker-5b79df9786-ds9vj", "timestamp":"2025-11-01 01:57:51.076784641 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0f05b56927", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.076 [INFO][4312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.076 [INFO][4312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.076 [INFO][4312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0f05b56927' Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.081 [INFO][4312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.085 [INFO][4312] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.088 [INFO][4312] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.089 [INFO][4312] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.091 [INFO][4312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 
host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.091 [INFO][4312] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.092 [INFO][4312] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.095 [INFO][4312] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.099 [INFO][4312] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.193/26] block=192.168.3.192/26 handle="k8s-pod-network.1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.099 [INFO][4312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.193/26] handle="k8s-pod-network.1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.099 [INFO][4312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:57:51.171589 env[1679]: 2025-11-01 01:57:51.099 [INFO][4312] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.193/26] IPv6=[] ContainerID="1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" HandleID="k8s-pod-network.1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" Workload="ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-eth0" Nov 1 01:57:51.172026 env[1679]: 2025-11-01 01:57:51.100 [INFO][4293] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" Namespace="calico-system" Pod="whisker-5b79df9786-ds9vj" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-eth0", GenerateName:"whisker-5b79df9786-", Namespace:"calico-system", SelfLink:"", UID:"aabe0a9d-10db-49d2-a1d8-2a8011591b5d", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b79df9786", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"", Pod:"whisker-5b79df9786-ds9vj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.3.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"califd1695f732b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:51.172026 env[1679]: 2025-11-01 01:57:51.101 [INFO][4293] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.193/32] ContainerID="1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" Namespace="calico-system" Pod="whisker-5b79df9786-ds9vj" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-eth0" Nov 1 01:57:51.172026 env[1679]: 2025-11-01 01:57:51.101 [INFO][4293] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd1695f732b ContainerID="1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" Namespace="calico-system" Pod="whisker-5b79df9786-ds9vj" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-eth0" Nov 1 01:57:51.172026 env[1679]: 2025-11-01 01:57:51.164 [INFO][4293] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" Namespace="calico-system" Pod="whisker-5b79df9786-ds9vj" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-eth0" Nov 1 01:57:51.172026 env[1679]: 2025-11-01 01:57:51.165 [INFO][4293] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" Namespace="calico-system" Pod="whisker-5b79df9786-ds9vj" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-eth0", GenerateName:"whisker-5b79df9786-", Namespace:"calico-system", SelfLink:"", UID:"aabe0a9d-10db-49d2-a1d8-2a8011591b5d", ResourceVersion:"872", 
Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b79df9786", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db", Pod:"whisker-5b79df9786-ds9vj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.3.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califd1695f732b", MAC:"46:ff:08:c7:e2:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:51.172026 env[1679]: 2025-11-01 01:57:51.170 [INFO][4293] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db" Namespace="calico-system" Pod="whisker-5b79df9786-ds9vj" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-whisker--5b79df9786--ds9vj-eth0" Nov 1 01:57:51.176099 env[1679]: time="2025-11-01T01:57:51.176069587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:51.176099 env[1679]: time="2025-11-01T01:57:51.176090179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:51.176099 env[1679]: time="2025-11-01T01:57:51.176097075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:51.176188 env[1679]: time="2025-11-01T01:57:51.176162830Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db pid=4340 runtime=io.containerd.runc.v2 Nov 1 01:57:51.204174 env[1679]: time="2025-11-01T01:57:51.204152164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b79df9786-ds9vj,Uid:aabe0a9d-10db-49d2-a1d8-2a8011591b5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e962095b4d3eaf1aa5f31920aa83d741fe1c99de4fa64303c62867439d004db\"" Nov 1 01:57:51.204871 env[1679]: time="2025-11-01T01:57:51.204860011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:57:51.585000 audit[4424]: AVC avc: denied { write } for pid=4424 comm="tee" name="fd" dev="proc" ino=27453 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:57:51.610616 env[1679]: time="2025-11-01T01:57:51.610425230Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:57:51.615344 env[1679]: time="2025-11-01T01:57:51.615280656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:57:51.615610 kubelet[2679]: E1101 01:57:51.615581 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:57:51.615879 kubelet[2679]: E1101 01:57:51.615624 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:57:51.615919 kubelet[2679]: E1101 01:57:51.615747 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b3852adad46a4293a11e539c5e005d65,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgwpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b79df9786-ds9vj_calico-system(aabe0a9d-10db-49d2-a1d8-2a8011591b5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:57:51.617358 env[1679]: time="2025-11-01T01:57:51.617338284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:57:51.585000 audit[4429]: AVC avc: denied { write } for pid=4429 comm="tee" name="fd" dev="proc" ino=41108 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:57:51.717162 kernel: audit: type=1400 audit(1761962271.585:280): avc: denied { write } for pid=4424 comm="tee" name="fd" dev="proc" ino=27453 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:57:51.717200 kernel: audit: type=1400 audit(1761962271.585:281): avc: denied { write } for pid=4429 comm="tee" name="fd" dev="proc" ino=41108 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:57:51.717223 kernel: audit: type=1400 audit(1761962271.585:279): avc: denied { write } for pid=4420 comm="tee" name="fd" dev="proc" ino=32378 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:57:51.585000 audit[4420]: AVC avc: denied { write } for pid=4420 comm="tee" name="fd" dev="proc" ino=32378 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:57:51.781276 kernel: audit: type=1400 audit(1761962271.585:282): avc: denied { write } for pid=4419 comm="tee" name="fd" 
dev="proc" ino=33196 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:57:51.585000 audit[4419]: AVC avc: denied { write } for pid=4419 comm="tee" name="fd" dev="proc" ino=33196 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:57:51.585000 audit[4426]: AVC avc: denied { write } for pid=4426 comm="tee" name="fd" dev="proc" ino=40035 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:57:51.585000 audit[4429]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffef36137b7 a2=241 a3=1b6 items=1 ppid=4385 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.585000 audit[4419]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdfc4aa7c8 a2=241 a3=1b6 items=1 ppid=4389 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.585000 audit[4426]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcfa6d97c9 a2=241 a3=1b6 items=1 ppid=4384 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.585000 audit[4420]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd4d3697c7 a2=241 a3=1b6 items=1 ppid=4383 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.585000 audit[4424]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 
a0=ffffff9c a1=7ffc759687c7 a2=241 a3=1b6 items=1 ppid=4382 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.585000 audit: CWD cwd="/etc/service/enabled/bird/log" Nov 1 01:57:51.585000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 01:57:51.585000 audit: CWD cwd="/etc/service/enabled/cni/log" Nov 1 01:57:51.585000 audit: CWD cwd="/etc/service/enabled/felix/log" Nov 1 01:57:51.585000 audit: CWD cwd="/etc/service/enabled/bird6/log" Nov 1 01:57:51.585000 audit: PATH item=0 name="/dev/fd/63" inode=32375 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:57:51.585000 audit: PATH item=0 name="/dev/fd/63" inode=41105 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:57:51.585000 audit: PATH item=0 name="/dev/fd/63" inode=33193 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:57:51.585000 audit: PATH item=0 name="/dev/fd/63" inode=27450 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:57:51.585000 audit: PATH item=0 name="/dev/fd/63" inode=38147 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:57:51.585000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:57:51.585000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:57:51.585000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:57:51.585000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:57:51.585000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:57:51.585000 audit[4422]: AVC avc: denied { write } for pid=4422 comm="tee" name="fd" dev="proc" ino=34316 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:57:51.585000 audit[4422]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe120ca7b8 a2=241 a3=1b6 items=1 ppid=4386 pid=4422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.585000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Nov 1 01:57:51.585000 audit: PATH item=0 name="/dev/fd/63" inode=34313 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:57:51.585000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:57:51.586000 audit[4432]: AVC avc: denied { write } for pid=4432 comm="tee" name="fd" dev="proc" ino=38150 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 
Nov 1 01:57:51.586000 audit[4432]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc8ed267c7 a2=241 a3=1b6 items=1 ppid=4387 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.586000 audit: CWD cwd="/etc/service/enabled/confd/log" Nov 1 01:57:51.586000 audit: PATH item=0 name="/dev/fd/63" inode=37133 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:57:51.586000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:57:51.730000 audit[4532]: AVC avc: denied { bpf } for pid=4532 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.730000 audit[4532]: AVC avc: denied { bpf } for pid=4532 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.730000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.730000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.730000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.730000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.730000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.730000 audit[4532]: AVC avc: denied { bpf } for pid=4532 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.730000 audit[4532]: AVC avc: denied { bpf } for pid=4532 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.730000 audit: BPF prog-id=10 op=LOAD Nov 1 01:57:51.730000 audit[4532]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd450ec210 a2=98 a3=1fffffffffffffff items=0 ppid=4390 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.730000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 01:57:51.843000 audit: BPF prog-id=10 op=UNLOAD Nov 1 01:57:51.843000 audit[4532]: AVC avc: denied { bpf } for pid=4532 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.843000 audit[4532]: AVC avc: denied { bpf } for pid=4532 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.843000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.843000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.843000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.843000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.843000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.843000 audit[4532]: AVC avc: denied { bpf } for pid=4532 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.843000 audit[4532]: AVC avc: denied { bpf } for pid=4532 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.843000 audit: BPF prog-id=11 op=LOAD Nov 1 01:57:51.843000 audit[4532]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd450ec0f0 a2=94 a3=3 items=0 ppid=4390 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.843000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 01:57:51.844000 audit: BPF prog-id=11 op=UNLOAD Nov 1 01:57:51.844000 audit[4532]: AVC avc: denied { bpf } for pid=4532 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4532]: AVC avc: denied { bpf } for pid=4532 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4532]: AVC avc: denied { bpf } for pid=4532 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4532]: AVC avc: denied { bpf } for pid=4532 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit: BPF prog-id=12 op=LOAD Nov 1 01:57:51.844000 audit[4532]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd450ec130 a2=94 a3=7ffd450ec310 items=0 ppid=4390 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.844000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 01:57:51.844000 audit: BPF prog-id=12 op=UNLOAD Nov 1 01:57:51.844000 audit[4532]: AVC avc: denied { perfmon } for pid=4532 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4532]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffd450ec200 a2=50 a3=a000000085 items=0 ppid=4390 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.844000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit: BPF prog-id=13 op=LOAD Nov 1 01:57:51.844000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef91d83d0 a2=98 a3=3 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.844000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.844000 audit: BPF prog-id=13 op=UNLOAD Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit: BPF prog-id=14 op=LOAD Nov 1 01:57:51.844000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef91d81c0 a2=94 a3=54428f items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.844000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.844000 audit: BPF prog-id=14 op=UNLOAD Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.844000 audit: BPF prog-id=15 op=LOAD Nov 1 01:57:51.844000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef91d81f0 a2=94 a3=2 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.844000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.844000 audit: BPF prog-id=15 op=UNLOAD Nov 1 01:57:51.932000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.932000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.932000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.932000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.932000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.932000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.932000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.932000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.932000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.932000 audit: BPF prog-id=16 op=LOAD Nov 1 01:57:51.932000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef91d80b0 a2=94 a3=1 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.932000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.932000 audit: BPF prog-id=16 op=UNLOAD Nov 1 01:57:51.932000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.932000 audit[4533]: SYSCALL arch=c000003e syscall=321 
success=yes exit=4 a0=0 a1=7ffef91d8180 a2=50 a3=7ffef91d8260 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.932000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.938000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.938000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef91d80c0 a2=28 a3=0 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.938000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.938000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.938000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef91d80f0 a2=28 a3=0 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.938000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.938000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.938000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef91d8000 a2=28 a3=0 items=0 ppid=4390 
pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.938000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.938000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.938000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef91d8110 a2=28 a3=0 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.938000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef91d80f0 a2=28 a3=0 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef91d80e0 a2=28 a3=0 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef91d8110 a2=28 a3=0 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef91d80f0 a2=28 a3=0 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef91d8110 a2=28 a3=0 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef91d80e0 a2=28 a3=0 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef91d8150 a2=28 a3=0 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffef91d7f00 a2=50 a3=1 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 
audit: BPF prog-id=17 op=LOAD Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef91d7f00 a2=94 a3=5 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit: BPF prog-id=17 op=UNLOAD Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffef91d7fb0 a2=50 a3=1 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffef91d80d0 a2=4 a3=38 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { confidentiality } for 
pid=4533 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffef91d8120 a2=94 a3=6 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { confidentiality } for pid=4533 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffef91d78d0 a2=94 a3=88 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { perfmon } for pid=4533 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { bpf } for pid=4533 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.939000 audit[4533]: AVC avc: denied { confidentiality } for pid=4533 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:57:51.939000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffef91d78d0 a2=94 a3=88 items=0 ppid=4390 pid=4533 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { bpf } for pid=4536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { bpf } for pid=4536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { bpf } for pid=4536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC 
avc: denied { bpf } for pid=4536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit: BPF prog-id=18 op=LOAD Nov 1 01:57:51.943000 audit[4536]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd32076430 a2=98 a3=1999999999999999 items=0 ppid=4390 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.943000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 01:57:51.943000 audit: BPF prog-id=18 op=UNLOAD Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { bpf } for pid=4536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { bpf } for pid=4536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 
1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { bpf } for pid=4536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { bpf } for pid=4536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit: BPF prog-id=19 op=LOAD Nov 1 01:57:51.943000 audit[4536]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd32076310 a2=94 a3=ffff items=0 ppid=4390 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.943000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 01:57:51.943000 audit: BPF prog-id=19 op=UNLOAD Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { bpf } for pid=4536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { bpf } for pid=4536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { perfmon } for pid=4536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { bpf } for pid=4536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit[4536]: AVC avc: denied { bpf } for pid=4536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.943000 audit: BPF prog-id=20 op=LOAD Nov 1 01:57:51.943000 audit[4536]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd32076350 a2=94 a3=7ffd32076530 items=0 ppid=4390 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.943000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 01:57:51.943000 audit: BPF prog-id=20 op=UNLOAD Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit: BPF prog-id=21 op=LOAD Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8c9c54c0 a2=98 a3=0 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit: BPF prog-id=21 op=UNLOAD Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit: BPF prog-id=22 op=LOAD Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8c9c52d0 a2=94 a3=54428f items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit: BPF prog-id=22 op=UNLOAD Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit: BPF prog-id=23 op=LOAD Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8c9c5300 a2=94 a3=2 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit: BPF prog-id=23 op=UNLOAD Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc8c9c51d0 a2=28 a3=0 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc8c9c5200 a2=28 a3=0 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc8c9c5110 a2=28 a3=0 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc8c9c5220 a2=28 a3=0 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc8c9c5200 a2=28 a3=0 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc8c9c51f0 a2=28 a3=0 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc8c9c5220 a2=28 a3=0 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc8c9c5200 a2=28 a3=0 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc8c9c5220 a2=28 a3=0 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc8c9c51f0 a2=28 a3=0 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc8c9c5260 a2=28 a3=0 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 
audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit: BPF prog-id=24 op=LOAD Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc8c9c50d0 a2=94 a3=0 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit: BPF prog-id=24 op=UNLOAD Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: SYSCALL 
arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffc8c9c50c0 a2=50 a3=2800 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffc8c9c50c0 a2=50 a3=2800 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit: BPF prog-id=25 op=LOAD Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc8c9c48e0 a2=94 a3=2 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.971000 audit: BPF prog-id=25 op=UNLOAD Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { perfmon } for pid=4559 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit[4559]: AVC avc: denied { bpf } for pid=4559 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.971000 audit: BPF prog-id=26 op=LOAD Nov 1 01:57:51.971000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc8c9c49e0 a2=94 a3=30 items=0 ppid=4390 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit: BPF prog-id=27 op=LOAD Nov 1 01:57:51.973000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2cabcec0 a2=98 a3=0 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:51.973000 audit: BPF prog-id=27 op=UNLOAD Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit: BPF prog-id=28 op=LOAD Nov 1 01:57:51.973000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe2cabccb0 a2=94 a3=54428f items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.973000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:51.973000 audit: BPF prog-id=28 op=UNLOAD Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit[4566]: AVC avc: 
denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:51.973000 audit: BPF prog-id=29 op=LOAD Nov 1 01:57:51.973000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe2cabcce0 a2=94 a3=2 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:51.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:51.973000 audit: BPF prog-id=29 op=UNLOAD Nov 1 01:57:51.972865 systemd-networkd[1345]: vxlan.calico: Link UP Nov 1 01:57:51.972868 systemd-networkd[1345]: vxlan.calico: Gained carrier Nov 1 01:57:51.981629 env[1679]: time="2025-11-01T01:57:51.981571833Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:57:51.981956 env[1679]: time="2025-11-01T01:57:51.981912057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:57:51.982061 kubelet[2679]: E1101 01:57:51.982017 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:57:51.982061 
kubelet[2679]: E1101 01:57:51.982045 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:57:51.982129 kubelet[2679]: E1101 01:57:51.982111 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgwpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*1
0001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b79df9786-ds9vj_calico-system(aabe0a9d-10db-49d2-a1d8-2a8011591b5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:57:51.983252 kubelet[2679]: E1101 01:57:51.983208 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 01:57:52.061000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.061000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 01:57:52.061000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.061000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.061000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.061000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.061000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.061000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.061000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.061000 audit: BPF prog-id=30 op=LOAD Nov 1 01:57:52.061000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe2cabcba0 a2=94 a3=1 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.061000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.062000 audit: BPF prog-id=30 op=UNLOAD Nov 1 01:57:52.062000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.062000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe2cabcc70 a2=50 a3=7ffe2cabcd50 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.062000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe2cabcbb0 a2=28 a3=0 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 01:57:52.068000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2cabcbe0 a2=28 a3=0 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2cabcaf0 a2=28 a3=0 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe2cabcc00 a2=28 a3=0 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.068000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe2cabcbe0 a2=28 a3=0 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe2cabcbd0 a2=28 a3=0 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=12 a1=7ffe2cabcc00 a2=28 a3=0 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2cabcbe0 a2=28 a3=0 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2cabcc00 a2=28 a3=0 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.068000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2cabcbd0 a2=28 a3=0 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe2cabcc40 a2=28 a3=0 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=5 a0=0 a1=7ffe2cabc9f0 a2=50 a3=1 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
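Editor's note: the repeated `proctitle=6270…` fields in the audit records above are the audited process's argv, hex-encoded with NUL separators. A minimal decoding sketch (stdlib Python; the hex string is copied verbatim from the records above):

```python
# Audit PROCTITLE fields carry the process argv, hex-encoded, NUL-separated.
hexstr = (
    "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F77"
    "0070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F7072"
    "6566696C7465725F76315F63616C69636F5F746D705F41"
)
argv = [part.decode() for part in bytes.fromhex(hexstr).split(b"\x00")]
print(argv)
# → ['bpftool', '--json', '--pretty', 'prog', 'show', 'pinned',
#    '/sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A']
```

Decoded, this is `bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A`, i.e. Calico inspecting its pinned XDP prefilter program, which is what triggers the audit storm above.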
Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.068000 audit: BPF prog-id=31 op=LOAD Nov 1 01:57:52.068000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe2cabc9f0 a2=94 a3=5 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.069000 audit: BPF prog-id=31 op=UNLOAD Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe2cabcaa0 a2=50 a3=1 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe2cabcbc0 a2=4 a3=38 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { confidentiality } for pid=4566 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:57:52.069000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe2cabcc10 a2=94 a3=6 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
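Editor's note: in these SYSCALL records `arch=c000003e` is x86_64 and `syscall=321` is bpf(2); the denied `capability=38` and `capability=39` correspond to CAP_PERFMON and CAP_BPF, matching the `{ perfmon }` and `{ bpf }` AVC text. On failure the `exit` field is the negated errno, so `exit=-22` means the bpf() call returned -EINVAL, consistent with the lockdown denial ("use of bpf to read kernel RAM") above. A quick stdlib check of the errno mapping:

```python
import errno
import os

# exit=-22 in an audit SYSCALL record is a negated errno value.
code = 22  # from exit=-22
print(errno.errorcode[code])  # → 'EINVAL'
print(os.strerror(code))      # e.g. 'Invalid argument'
```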
Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { confidentiality } for pid=4566 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:57:52.069000 
audit[4566]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe2cabc3c0 a2=94 a3=88 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { perfmon } for pid=4566 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { confidentiality } for pid=4566 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:57:52.069000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe2cabc3c0 a2=94 a3=88 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe2cabddf0 a2=10 a3=f8f00800 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe2cabdc90 a2=10 a3=3 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe2cabdc30 a2=10 a3=3 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.069000 audit[4566]: AVC avc: denied { bpf } for pid=4566 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:57:52.069000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe2cabdc30 a2=10 a3=7 items=0 ppid=4390 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.069000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:57:52.080000 audit: BPF prog-id=26 op=UNLOAD Nov 1 01:57:52.205000 audit[4623]: NETFILTER_CFG table=mangle:103 family=2 entries=16 op=nft_register_chain pid=4623 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:57:52.205000 audit[4623]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff7c260930 a2=0 a3=7fff7c26091c items=0 ppid=4390 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.205000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:57:52.212000 audit[4621]: NETFILTER_CFG table=nat:104 family=2 entries=15 op=nft_register_chain pid=4621 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:57:52.212000 audit[4621]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffa7d2c740 a2=0 a3=7fffa7d2c72c items=0 ppid=4390 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.212000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:57:52.217000 audit[4622]: NETFILTER_CFG table=raw:105 family=2 entries=21 op=nft_register_chain pid=4622 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:57:52.217000 audit[4622]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe2d632120 a2=0 a3=7ffe2d63210c items=0 ppid=4390 pid=4622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.217000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:57:52.220000 audit[4626]: NETFILTER_CFG table=filter:106 family=2 entries=94 op=nft_register_chain pid=4626 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:57:52.220000 audit[4626]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffd19cf0750 a2=0 a3=7ffd19cf073c items=0 ppid=4390 pid=4626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.220000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:57:52.517367 kubelet[2679]: I1101 01:57:52.517232 2679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f74bb146-b95d-4f2f-8d41-d7fea2ff0e93" path="/var/lib/kubelet/pods/f74bb146-b95d-4f2f-8d41-d7fea2ff0e93/volumes" Nov 1 01:57:52.635121 kubelet[2679]: E1101 01:57:52.635002 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 01:57:52.651000 audit[4638]: NETFILTER_CFG table=filter:107 family=2 entries=20 op=nft_register_rule pid=4638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:52.651000 audit[4638]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdd7ab8790 a2=0 a3=7ffdd7ab877c items=0 ppid=2820 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.651000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:52.666000 audit[4638]: NETFILTER_CFG table=nat:108 family=2 entries=14 op=nft_register_rule pid=4638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:52.666000 audit[4638]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdd7ab8790 a2=0 a3=0 items=0 ppid=2820 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:52.666000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:52.973637 systemd-networkd[1345]: califd1695f732b: Gained IPv6LL Nov 1 01:57:53.613541 systemd-networkd[1345]: vxlan.calico: Gained IPv6LL Nov 1 01:57:54.513641 env[1679]: time="2025-11-01T01:57:54.513503899Z" level=info msg="StopPodSandbox for \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\"" Nov 1 01:57:54.609411 env[1679]: 2025-11-01 01:57:54.582 [INFO][4652] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:57:54.609411 env[1679]: 2025-11-01 01:57:54.582 [INFO][4652] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" iface="eth0" netns="/var/run/netns/cni-324ccd25-f7ff-d732-59df-17b8476b0875" Nov 1 01:57:54.609411 env[1679]: 2025-11-01 01:57:54.582 [INFO][4652] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" iface="eth0" netns="/var/run/netns/cni-324ccd25-f7ff-d732-59df-17b8476b0875" Nov 1 01:57:54.609411 env[1679]: 2025-11-01 01:57:54.582 [INFO][4652] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" iface="eth0" netns="/var/run/netns/cni-324ccd25-f7ff-d732-59df-17b8476b0875" Nov 1 01:57:54.609411 env[1679]: 2025-11-01 01:57:54.582 [INFO][4652] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:57:54.609411 env[1679]: 2025-11-01 01:57:54.582 [INFO][4652] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:57:54.609411 env[1679]: 2025-11-01 01:57:54.599 [INFO][4669] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" HandleID="k8s-pod-network.0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Workload="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:57:54.609411 env[1679]: 2025-11-01 01:57:54.599 [INFO][4669] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:57:54.609411 env[1679]: 2025-11-01 01:57:54.599 [INFO][4669] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:54.609411 env[1679]: 2025-11-01 01:57:54.605 [WARNING][4669] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" HandleID="k8s-pod-network.0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Workload="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:57:54.609411 env[1679]: 2025-11-01 01:57:54.605 [INFO][4669] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" HandleID="k8s-pod-network.0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Workload="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:57:54.609411 env[1679]: 2025-11-01 01:57:54.607 [INFO][4669] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:54.609411 env[1679]: 2025-11-01 01:57:54.608 [INFO][4652] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:57:54.609853 env[1679]: time="2025-11-01T01:57:54.609487458Z" level=info msg="TearDown network for sandbox \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\" successfully" Nov 1 01:57:54.609853 env[1679]: time="2025-11-01T01:57:54.609514668Z" level=info msg="StopPodSandbox for \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\" returns successfully" Nov 1 01:57:54.610021 env[1679]: time="2025-11-01T01:57:54.609999027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bm44l,Uid:6bd4bd36-d549-4194-a331-51709a095bb2,Namespace:calico-system,Attempt:1,}" Nov 1 01:57:54.612179 systemd[1]: run-netns-cni\x2d324ccd25\x2df7ff\x2dd732\x2d59df\x2d17b8476b0875.mount: Deactivated successfully. 
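Editor's note: the unit name in `run-netns-cni\x2d324ccd25….mount` above is a systemd mount-path escape, where "/" becomes "-" and a literal "-" becomes "\x2d". A simplified unescape sketch (full systemd escaping handles arbitrary `\xNN` sequences; this only handles the `\x2d` case seen in this log):

```python
# Undo systemd mount-unit escaping: "/" was encoded as "-", "-" as "\x2d".
# Simplified sketch: only the \x2d escape present in this log is handled.
unit = r"run-netns-cni\x2d324ccd25\x2df7ff\x2dd732\x2d59df\x2d17b8476b0875.mount"
stem = unit.removesuffix(".mount")
path = "/" + stem.replace(r"\x2d", "\x00").replace("-", "/").replace("\x00", "-")
print(path)  # → /run/netns/cni-324ccd25-f7ff-d732-59df-17b8476b0875
```

The result matches the netns path in the CNI teardown records above (`/var/run/netns/cni-324ccd25-…`; on Flatcar, /var/run is a symlink to /run), confirming this is the mount unit for that pod's network namespace.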
Nov 1 01:57:54.702142 systemd-networkd[1345]: calib9f3fd44435: Link UP Nov 1 01:57:54.758224 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:57:54.758254 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib9f3fd44435: link becomes ready Nov 1 01:57:54.758267 systemd-networkd[1345]: calib9f3fd44435: Gained carrier Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.651 [INFO][4687] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0 goldmane-666569f655- calico-system 6bd4bd36-d549-4194-a331-51709a095bb2 900 0 2025-11-01 01:57:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-n-0f05b56927 goldmane-666569f655-bm44l eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib9f3fd44435 [] [] }} ContainerID="406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" Namespace="calico-system" Pod="goldmane-666569f655-bm44l" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-" Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.651 [INFO][4687] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" Namespace="calico-system" Pod="goldmane-666569f655-bm44l" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.670 [INFO][4711] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" HandleID="k8s-pod-network.406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" Workload="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" 
Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.670 [INFO][4711] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" HandleID="k8s-pod-network.406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" Workload="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e9360), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-0f05b56927", "pod":"goldmane-666569f655-bm44l", "timestamp":"2025-11-01 01:57:54.670242957 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0f05b56927", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.670 [INFO][4711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.670 [INFO][4711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.670 [INFO][4711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0f05b56927' Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.677 [INFO][4711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.681 [INFO][4711] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.685 [INFO][4711] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.687 [INFO][4711] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.689 [INFO][4711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.689 [INFO][4711] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.690 [INFO][4711] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.694 [INFO][4711] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.698 [INFO][4711] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.194/26] block=192.168.3.192/26 
handle="k8s-pod-network.406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.698 [INFO][4711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.194/26] handle="k8s-pod-network.406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.699 [INFO][4711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:54.769150 env[1679]: 2025-11-01 01:57:54.699 [INFO][4711] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.194/26] IPv6=[] ContainerID="406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" HandleID="k8s-pod-network.406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" Workload="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:57:54.770469 env[1679]: 2025-11-01 01:57:54.700 [INFO][4687] cni-plugin/k8s.go 418: Populated endpoint ContainerID="406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" Namespace="calico-system" Pod="goldmane-666569f655-bm44l" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6bd4bd36-d549-4194-a331-51709a095bb2", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"", Pod:"goldmane-666569f655-bm44l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9f3fd44435", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:54.770469 env[1679]: 2025-11-01 01:57:54.700 [INFO][4687] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.194/32] ContainerID="406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" Namespace="calico-system" Pod="goldmane-666569f655-bm44l" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:57:54.770469 env[1679]: 2025-11-01 01:57:54.700 [INFO][4687] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9f3fd44435 ContainerID="406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" Namespace="calico-system" Pod="goldmane-666569f655-bm44l" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:57:54.770469 env[1679]: 2025-11-01 01:57:54.758 [INFO][4687] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" Namespace="calico-system" Pod="goldmane-666569f655-bm44l" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:57:54.770469 env[1679]: 2025-11-01 01:57:54.758 [INFO][4687] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" Namespace="calico-system" Pod="goldmane-666569f655-bm44l" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6bd4bd36-d549-4194-a331-51709a095bb2", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa", Pod:"goldmane-666569f655-bm44l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9f3fd44435", MAC:"5e:f4:52:16:a3:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:54.770469 env[1679]: 2025-11-01 01:57:54.766 [INFO][4687] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa" Namespace="calico-system" Pod="goldmane-666569f655-bm44l" 
WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:57:54.779692 env[1679]: time="2025-11-01T01:57:54.779600148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:54.779692 env[1679]: time="2025-11-01T01:57:54.779647507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:54.779692 env[1679]: time="2025-11-01T01:57:54.779663823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:54.779919 env[1679]: time="2025-11-01T01:57:54.779816568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa pid=4745 runtime=io.containerd.runc.v2 Nov 1 01:57:54.782000 audit[4757]: NETFILTER_CFG table=filter:109 family=2 entries=44 op=nft_register_chain pid=4757 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:57:54.810580 kernel: kauditd_printk_skb: 559 callbacks suppressed Nov 1 01:57:54.810657 kernel: audit: type=1325 audit(1761962274.782:390): table=filter:109 family=2 entries=44 op=nft_register_chain pid=4757 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:57:54.782000 audit[4757]: SYSCALL arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffdbae41930 a2=0 a3=7ffdbae4191c items=0 ppid=4390 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:54.964028 kernel: audit: type=1300 audit(1761962274.782:390): arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffdbae41930 a2=0 a3=7ffdbae4191c items=0 ppid=4390 pid=4757 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:54.964077 kernel: audit: type=1327 audit(1761962274.782:390): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:57:54.782000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:57:55.041285 env[1679]: time="2025-11-01T01:57:55.041233049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bm44l,Uid:6bd4bd36-d549-4194-a331-51709a095bb2,Namespace:calico-system,Attempt:1,} returns sandbox id \"406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa\"" Nov 1 01:57:55.041915 env[1679]: time="2025-11-01T01:57:55.041901603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:57:55.376949 env[1679]: time="2025-11-01T01:57:55.376664491Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:57:55.377819 env[1679]: time="2025-11-01T01:57:55.377670916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:57:55.378268 kubelet[2679]: E1101 01:57:55.378176 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:57:55.379162 kubelet[2679]: E1101 01:57:55.378287 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:57:55.379162 kubelet[2679]: E1101 01:57:55.378704 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45r66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRead
Only:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bm44l_calico-system(6bd4bd36-d549-4194-a331-51709a095bb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:57:55.380414 kubelet[2679]: E1101 01:57:55.380196 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 01:57:55.513507 env[1679]: time="2025-11-01T01:57:55.513377584Z" level=info msg="StopPodSandbox for \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\"" Nov 1 01:57:55.513507 env[1679]: time="2025-11-01T01:57:55.513366693Z" level=info msg="StopPodSandbox for \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\"" Nov 1 01:57:55.611710 env[1679]: 2025-11-01 01:57:55.585 [INFO][4804] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:57:55.611710 env[1679]: 2025-11-01 01:57:55.585 [INFO][4804] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" iface="eth0" netns="/var/run/netns/cni-46714d64-d154-2d97-9332-f1d377a164a0" Nov 1 01:57:55.611710 env[1679]: 2025-11-01 01:57:55.586 [INFO][4804] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" iface="eth0" netns="/var/run/netns/cni-46714d64-d154-2d97-9332-f1d377a164a0" Nov 1 01:57:55.611710 env[1679]: 2025-11-01 01:57:55.586 [INFO][4804] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" iface="eth0" netns="/var/run/netns/cni-46714d64-d154-2d97-9332-f1d377a164a0" Nov 1 01:57:55.611710 env[1679]: 2025-11-01 01:57:55.586 [INFO][4804] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:57:55.611710 env[1679]: 2025-11-01 01:57:55.586 [INFO][4804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:57:55.611710 env[1679]: 2025-11-01 01:57:55.601 [INFO][4833] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" HandleID="k8s-pod-network.f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:57:55.611710 env[1679]: 2025-11-01 01:57:55.601 [INFO][4833] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:57:55.611710 env[1679]: 2025-11-01 01:57:55.601 [INFO][4833] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:55.611710 env[1679]: 2025-11-01 01:57:55.608 [WARNING][4833] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" HandleID="k8s-pod-network.f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:57:55.611710 env[1679]: 2025-11-01 01:57:55.608 [INFO][4833] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" HandleID="k8s-pod-network.f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:57:55.611710 env[1679]: 2025-11-01 01:57:55.609 [INFO][4833] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:55.611710 env[1679]: 2025-11-01 01:57:55.610 [INFO][4804] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:57:55.612158 env[1679]: time="2025-11-01T01:57:55.611845604Z" level=info msg="TearDown network for sandbox \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\" successfully" Nov 1 01:57:55.612158 env[1679]: time="2025-11-01T01:57:55.611882358Z" level=info msg="StopPodSandbox for \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\" returns successfully" Nov 1 01:57:55.612547 env[1679]: time="2025-11-01T01:57:55.612488199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-spzcr,Uid:183529c2-fd5c-4a2e-b002-133e45559e04,Namespace:kube-system,Attempt:1,}" Nov 1 01:57:55.615382 systemd[1]: run-netns-cni\x2d46714d64\x2dd154\x2d2d97\x2d9332\x2df1d377a164a0.mount: Deactivated successfully. 
Nov 1 01:57:55.618378 env[1679]: 2025-11-01 01:57:55.585 [INFO][4803] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:57:55.618378 env[1679]: 2025-11-01 01:57:55.585 [INFO][4803] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" iface="eth0" netns="/var/run/netns/cni-4bc435d2-c38e-f759-e26f-90fea030f75d" Nov 1 01:57:55.618378 env[1679]: 2025-11-01 01:57:55.586 [INFO][4803] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" iface="eth0" netns="/var/run/netns/cni-4bc435d2-c38e-f759-e26f-90fea030f75d" Nov 1 01:57:55.618378 env[1679]: 2025-11-01 01:57:55.586 [INFO][4803] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" iface="eth0" netns="/var/run/netns/cni-4bc435d2-c38e-f759-e26f-90fea030f75d" Nov 1 01:57:55.618378 env[1679]: 2025-11-01 01:57:55.586 [INFO][4803] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:57:55.618378 env[1679]: 2025-11-01 01:57:55.586 [INFO][4803] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:57:55.618378 env[1679]: 2025-11-01 01:57:55.601 [INFO][4834] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" HandleID="k8s-pod-network.5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:57:55.618378 env[1679]: 2025-11-01 01:57:55.602 [INFO][4834] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Nov 1 01:57:55.618378 env[1679]: 2025-11-01 01:57:55.609 [INFO][4834] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:55.618378 env[1679]: 2025-11-01 01:57:55.614 [WARNING][4834] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" HandleID="k8s-pod-network.5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:57:55.618378 env[1679]: 2025-11-01 01:57:55.614 [INFO][4834] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" HandleID="k8s-pod-network.5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:57:55.618378 env[1679]: 2025-11-01 01:57:55.616 [INFO][4834] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:55.618378 env[1679]: 2025-11-01 01:57:55.617 [INFO][4803] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:57:55.618934 env[1679]: time="2025-11-01T01:57:55.618459752Z" level=info msg="TearDown network for sandbox \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\" successfully" Nov 1 01:57:55.618934 env[1679]: time="2025-11-01T01:57:55.618482392Z" level=info msg="StopPodSandbox for \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\" returns successfully" Nov 1 01:57:55.619010 env[1679]: time="2025-11-01T01:57:55.618977896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ddd8b55c8-kbtkg,Uid:792abee8-a81f-4cb1-9ede-47798a35f0b4,Namespace:calico-system,Attempt:1,}" Nov 1 01:57:55.625005 systemd[1]: run-netns-cni\x2d4bc435d2\x2dc38e\x2df759\x2de26f\x2d90fea030f75d.mount: Deactivated successfully. Nov 1 01:57:55.640597 kubelet[2679]: E1101 01:57:55.640487 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 01:57:55.657000 audit[4928]: NETFILTER_CFG table=filter:110 family=2 entries=20 op=nft_register_rule pid=4928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:55.657000 audit[4928]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffffd184da0 a2=0 a3=7ffffd184d8c items=0 ppid=2820 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Nov 1 01:57:55.790478 systemd-networkd[1345]: calib9f3fd44435: Gained IPv6LL Nov 1 01:57:55.812532 kernel: audit: type=1325 audit(1761962275.657:391): table=filter:110 family=2 entries=20 op=nft_register_rule pid=4928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:55.812591 kernel: audit: type=1300 audit(1761962275.657:391): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffffd184da0 a2=0 a3=7ffffd184d8c items=0 ppid=2820 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:55.812606 kernel: audit: type=1327 audit(1761962275.657:391): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:55.657000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:55.826300 systemd-networkd[1345]: cali759051e6e86: Link UP Nov 1 01:57:55.870334 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:57:55.871000 audit[4928]: NETFILTER_CFG table=nat:111 family=2 entries=14 op=nft_register_rule pid=4928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:55.953571 kernel: audit: type=1325 audit(1761962275.871:392): table=nat:111 family=2 entries=14 op=nft_register_rule pid=4928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:55.953611 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali759051e6e86: link becomes ready Nov 1 01:57:55.953635 kernel: audit: type=1300 audit(1761962275.871:392): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffffd184da0 a2=0 a3=0 items=0 ppid=2820 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:55.871000 audit[4928]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffffd184da0 a2=0 a3=0 items=0 ppid=2820 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:55.981513 systemd-networkd[1345]: cali759051e6e86: Gained carrier Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.642 [INFO][4862] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0 coredns-668d6bf9bc- kube-system 183529c2-fd5c-4a2e-b002-133e45559e04 912 0 2025-11-01 01:57:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-0f05b56927 coredns-668d6bf9bc-spzcr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali759051e6e86 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" Namespace="kube-system" Pod="coredns-668d6bf9bc-spzcr" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-" Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.642 [INFO][4862] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" Namespace="kube-system" Pod="coredns-668d6bf9bc-spzcr" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.659 [INFO][4907] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" 
HandleID="k8s-pod-network.17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.659 [INFO][4907] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" HandleID="k8s-pod-network.17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-0f05b56927", "pod":"coredns-668d6bf9bc-spzcr", "timestamp":"2025-11-01 01:57:55.659476612 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0f05b56927", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.659 [INFO][4907] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.659 [INFO][4907] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.659 [INFO][4907] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0f05b56927' Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.665 [INFO][4907] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.669 [INFO][4907] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.815 [INFO][4907] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.816 [INFO][4907] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.817 [INFO][4907] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.817 [INFO][4907] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.818 [INFO][4907] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207 Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.821 [INFO][4907] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.824 [INFO][4907] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.195/26] block=192.168.3.192/26 
handle="k8s-pod-network.17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.824 [INFO][4907] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.195/26] handle="k8s-pod-network.17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.824 [INFO][4907] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:55.988432 env[1679]: 2025-11-01 01:57:55.824 [INFO][4907] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.195/26] IPv6=[] ContainerID="17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" HandleID="k8s-pod-network.17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:57:55.988906 env[1679]: 2025-11-01 01:57:55.825 [INFO][4862] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" Namespace="kube-system" Pod="coredns-668d6bf9bc-spzcr" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"183529c2-fd5c-4a2e-b002-133e45559e04", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"", Pod:"coredns-668d6bf9bc-spzcr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali759051e6e86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:55.988906 env[1679]: 2025-11-01 01:57:55.825 [INFO][4862] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.195/32] ContainerID="17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" Namespace="kube-system" Pod="coredns-668d6bf9bc-spzcr" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:57:55.988906 env[1679]: 2025-11-01 01:57:55.825 [INFO][4862] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali759051e6e86 ContainerID="17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" Namespace="kube-system" Pod="coredns-668d6bf9bc-spzcr" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:57:55.988906 env[1679]: 2025-11-01 01:57:55.981 [INFO][4862] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-spzcr" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:57:55.988906 env[1679]: 2025-11-01 01:57:55.981 [INFO][4862] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" Namespace="kube-system" Pod="coredns-668d6bf9bc-spzcr" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"183529c2-fd5c-4a2e-b002-133e45559e04", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207", Pod:"coredns-668d6bf9bc-spzcr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali759051e6e86", MAC:"fe:68:53:c8:1c:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:55.988906 env[1679]: 2025-11-01 01:57:55.987 [INFO][4862] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207" Namespace="kube-system" Pod="coredns-668d6bf9bc-spzcr" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:57:55.993355 env[1679]: time="2025-11-01T01:57:55.993309306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:55.993355 env[1679]: time="2025-11-01T01:57:55.993341896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:55.993355 env[1679]: time="2025-11-01T01:57:55.993349736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:55.993514 env[1679]: time="2025-11-01T01:57:55.993422392Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207 pid=4954 runtime=io.containerd.runc.v2 Nov 1 01:57:55.993455 systemd-networkd[1345]: calid3a6d4e0f08: Link UP Nov 1 01:57:55.871000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:56.102206 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid3a6d4e0f08: link becomes ready Nov 1 01:57:56.102236 kernel: audit: type=1327 audit(1761962275.871:392): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:56.102306 systemd-networkd[1345]: calid3a6d4e0f08: Gained carrier Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.649 [INFO][4873] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0 calico-kube-controllers-5ddd8b55c8- calico-system 792abee8-a81f-4cb1-9ede-47798a35f0b4 913 0 2025-11-01 01:57:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5ddd8b55c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-n-0f05b56927 calico-kube-controllers-5ddd8b55c8-kbtkg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid3a6d4e0f08 [] [] }} ContainerID="26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" Namespace="calico-system" Pod="calico-kube-controllers-5ddd8b55c8-kbtkg" 
WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-" Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.649 [INFO][4873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" Namespace="calico-system" Pod="calico-kube-controllers-5ddd8b55c8-kbtkg" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.669 [INFO][4919] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" HandleID="k8s-pod-network.26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.669 [INFO][4919] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" HandleID="k8s-pod-network.26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000690f30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-0f05b56927", "pod":"calico-kube-controllers-5ddd8b55c8-kbtkg", "timestamp":"2025-11-01 01:57:55.66930523 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0f05b56927", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.670 [INFO][4919] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.824 [INFO][4919] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.824 [INFO][4919] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0f05b56927' Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.828 [INFO][4919] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.872 [INFO][4919] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.955 [INFO][4919] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.981 [INFO][4919] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.983 [INFO][4919] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.983 [INFO][4919] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.984 [INFO][4919] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09 Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.987 [INFO][4919] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:56.123368 env[1679]: 2025-11-01 
01:57:55.991 [INFO][4919] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.196/26] block=192.168.3.192/26 handle="k8s-pod-network.26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.991 [INFO][4919] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.196/26] handle="k8s-pod-network.26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.991 [INFO][4919] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:56.123368 env[1679]: 2025-11-01 01:57:55.991 [INFO][4919] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.196/26] IPv6=[] ContainerID="26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" HandleID="k8s-pod-network.26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:57:56.123792 env[1679]: 2025-11-01 01:57:55.992 [INFO][4873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" Namespace="calico-system" Pod="calico-kube-controllers-5ddd8b55c8-kbtkg" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0", GenerateName:"calico-kube-controllers-5ddd8b55c8-", Namespace:"calico-system", SelfLink:"", UID:"792abee8-a81f-4cb1-9ede-47798a35f0b4", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ddd8b55c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"", Pod:"calico-kube-controllers-5ddd8b55c8-kbtkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3a6d4e0f08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:56.123792 env[1679]: 2025-11-01 01:57:55.992 [INFO][4873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.196/32] ContainerID="26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" Namespace="calico-system" Pod="calico-kube-controllers-5ddd8b55c8-kbtkg" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:57:56.123792 env[1679]: 2025-11-01 01:57:55.992 [INFO][4873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3a6d4e0f08 ContainerID="26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" Namespace="calico-system" Pod="calico-kube-controllers-5ddd8b55c8-kbtkg" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:57:56.123792 env[1679]: 2025-11-01 01:57:56.102 [INFO][4873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" Namespace="calico-system" Pod="calico-kube-controllers-5ddd8b55c8-kbtkg" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:57:56.123792 env[1679]: 2025-11-01 01:57:56.102 [INFO][4873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" Namespace="calico-system" Pod="calico-kube-controllers-5ddd8b55c8-kbtkg" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0", GenerateName:"calico-kube-controllers-5ddd8b55c8-", Namespace:"calico-system", SelfLink:"", UID:"792abee8-a81f-4cb1-9ede-47798a35f0b4", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ddd8b55c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09", Pod:"calico-kube-controllers-5ddd8b55c8-kbtkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.196/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3a6d4e0f08", MAC:"7e:66:26:95:8f:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:56.123792 env[1679]: 2025-11-01 01:57:56.122 [INFO][4873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09" Namespace="calico-system" Pod="calico-kube-controllers-5ddd8b55c8-kbtkg" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:57:56.127826 env[1679]: time="2025-11-01T01:57:56.127758840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:56.127826 env[1679]: time="2025-11-01T01:57:56.127781029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:56.127826 env[1679]: time="2025-11-01T01:57:56.127788029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:56.127954 env[1679]: time="2025-11-01T01:57:56.127914007Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09 pid=4997 runtime=io.containerd.runc.v2 Nov 1 01:57:56.109000 audit[4985]: NETFILTER_CFG table=filter:112 family=2 entries=46 op=nft_register_chain pid=4985 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:57:56.215052 kernel: audit: type=1325 audit(1761962276.109:393): table=filter:112 family=2 entries=46 op=nft_register_chain pid=4985 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:57:56.109000 audit[4985]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7ffd46e3a400 a2=0 a3=7ffd46e3a3ec items=0 ppid=4390 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:56.109000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:57:56.231000 audit[5032]: NETFILTER_CFG table=filter:113 family=2 entries=44 op=nft_register_chain pid=5032 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:57:56.231000 audit[5032]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7fff73189880 a2=0 a3=7fff7318986c items=0 ppid=4390 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:56.231000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:57:56.232413 env[1679]: time="2025-11-01T01:57:56.232393862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-spzcr,Uid:183529c2-fd5c-4a2e-b002-133e45559e04,Namespace:kube-system,Attempt:1,} returns sandbox id \"17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207\"" Nov 1 01:57:56.233555 env[1679]: time="2025-11-01T01:57:56.233539795Z" level=info msg="CreateContainer within sandbox \"17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:57:56.239350 env[1679]: time="2025-11-01T01:57:56.239315904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ddd8b55c8-kbtkg,Uid:792abee8-a81f-4cb1-9ede-47798a35f0b4,Namespace:calico-system,Attempt:1,} returns sandbox id \"26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09\"" Nov 1 01:57:56.240054 env[1679]: time="2025-11-01T01:57:56.240040072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:57:56.242709 env[1679]: time="2025-11-01T01:57:56.242671144Z" level=info msg="CreateContainer within sandbox \"17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"243da7010047bc9ea5815d6c55acf6597ba81d5db01475a2bc27b47fb1027fc2\"" Nov 1 01:57:56.242918 env[1679]: time="2025-11-01T01:57:56.242872719Z" level=info msg="StartContainer for \"243da7010047bc9ea5815d6c55acf6597ba81d5db01475a2bc27b47fb1027fc2\"" Nov 1 01:57:56.263340 env[1679]: time="2025-11-01T01:57:56.263302000Z" level=info msg="StartContainer for \"243da7010047bc9ea5815d6c55acf6597ba81d5db01475a2bc27b47fb1027fc2\" returns successfully" Nov 1 01:57:56.579745 env[1679]: time="2025-11-01T01:57:56.579595159Z" level=info msg="trying 
next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:57:56.580862 env[1679]: time="2025-11-01T01:57:56.580715231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:57:56.581344 kubelet[2679]: E1101 01:57:56.581220 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:57:56.582084 kubelet[2679]: E1101 01:57:56.581353 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:57:56.582084 kubelet[2679]: E1101 01:57:56.581656 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rk5mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5ddd8b55c8-kbtkg_calico-system(792abee8-a81f-4cb1-9ede-47798a35f0b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:57:56.583131 kubelet[2679]: E1101 01:57:56.583013 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 01:57:56.652034 kubelet[2679]: E1101 01:57:56.651910 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 01:57:56.652034 kubelet[2679]: E1101 01:57:56.651931 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 01:57:56.677521 kubelet[2679]: I1101 01:57:56.677410 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-spzcr" podStartSLOduration=37.677372362 podStartE2EDuration="37.677372362s" podCreationTimestamp="2025-11-01 01:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:57:56.676301797 +0000 UTC m=+42.303938196" watchObservedRunningTime="2025-11-01 01:57:56.677372362 +0000 UTC m=+42.305008726" Nov 1 01:57:56.702000 audit[5095]: NETFILTER_CFG table=filter:114 family=2 entries=20 op=nft_register_rule pid=5095 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:56.702000 audit[5095]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe4e9524b0 a2=0 a3=7ffe4e95249c items=0 ppid=2820 
pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:56.702000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:56.711000 audit[5095]: NETFILTER_CFG table=nat:115 family=2 entries=14 op=nft_register_rule pid=5095 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:56.711000 audit[5095]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe4e9524b0 a2=0 a3=0 items=0 ppid=2820 pid=5095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:56.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:56.755000 audit[5097]: NETFILTER_CFG table=filter:116 family=2 entries=17 op=nft_register_rule pid=5097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:56.755000 audit[5097]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd20fb3e90 a2=0 a3=7ffd20fb3e7c items=0 ppid=2820 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:56.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:56.763000 audit[5097]: NETFILTER_CFG table=nat:117 family=2 entries=35 op=nft_register_chain pid=5097 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:56.763000 audit[5097]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 
a1=7ffd20fb3e90 a2=0 a3=7ffd20fb3e7c items=0 ppid=2820 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:56.763000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:57.134539 systemd-networkd[1345]: calid3a6d4e0f08: Gained IPv6LL Nov 1 01:57:57.513005 env[1679]: time="2025-11-01T01:57:57.512896432Z" level=info msg="StopPodSandbox for \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\"" Nov 1 01:57:57.513514 env[1679]: time="2025-11-01T01:57:57.512912712Z" level=info msg="StopPodSandbox for \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\"" Nov 1 01:57:57.602599 env[1679]: 2025-11-01 01:57:57.573 [INFO][5127] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:57:57.602599 env[1679]: 2025-11-01 01:57:57.573 [INFO][5127] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" iface="eth0" netns="/var/run/netns/cni-62f87cc1-8823-dcad-5a64-1123e8a9fea8" Nov 1 01:57:57.602599 env[1679]: 2025-11-01 01:57:57.574 [INFO][5127] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" iface="eth0" netns="/var/run/netns/cni-62f87cc1-8823-dcad-5a64-1123e8a9fea8" Nov 1 01:57:57.602599 env[1679]: 2025-11-01 01:57:57.574 [INFO][5127] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" iface="eth0" netns="/var/run/netns/cni-62f87cc1-8823-dcad-5a64-1123e8a9fea8" Nov 1 01:57:57.602599 env[1679]: 2025-11-01 01:57:57.574 [INFO][5127] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:57:57.602599 env[1679]: 2025-11-01 01:57:57.574 [INFO][5127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:57:57.602599 env[1679]: 2025-11-01 01:57:57.589 [INFO][5163] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" HandleID="k8s-pod-network.a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:57:57.602599 env[1679]: 2025-11-01 01:57:57.589 [INFO][5163] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:57:57.602599 env[1679]: 2025-11-01 01:57:57.589 [INFO][5163] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:57.602599 env[1679]: 2025-11-01 01:57:57.594 [WARNING][5163] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" HandleID="k8s-pod-network.a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:57:57.602599 env[1679]: 2025-11-01 01:57:57.594 [INFO][5163] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" HandleID="k8s-pod-network.a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:57:57.602599 env[1679]: 2025-11-01 01:57:57.596 [INFO][5163] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:57.602599 env[1679]: 2025-11-01 01:57:57.599 [INFO][5127] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:57:57.604925 env[1679]: time="2025-11-01T01:57:57.602934094Z" level=info msg="TearDown network for sandbox \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\" successfully" Nov 1 01:57:57.604925 env[1679]: time="2025-11-01T01:57:57.603033621Z" level=info msg="StopPodSandbox for \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\" returns successfully" Nov 1 01:57:57.604925 env[1679]: time="2025-11-01T01:57:57.604529872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbf49c57b-msb77,Uid:66ab6902-4483-4337-8905-71710abec0d5,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:57:57.612348 systemd[1]: run-netns-cni\x2d62f87cc1\x2d8823\x2ddcad\x2d5a64\x2d1123e8a9fea8.mount: Deactivated successfully. 
Nov 1 01:57:57.621500 env[1679]: 2025-11-01 01:57:57.574 [INFO][5126] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:57:57.621500 env[1679]: 2025-11-01 01:57:57.574 [INFO][5126] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" iface="eth0" netns="/var/run/netns/cni-2626b841-fcca-c689-87d3-226004301654" Nov 1 01:57:57.621500 env[1679]: 2025-11-01 01:57:57.574 [INFO][5126] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" iface="eth0" netns="/var/run/netns/cni-2626b841-fcca-c689-87d3-226004301654" Nov 1 01:57:57.621500 env[1679]: 2025-11-01 01:57:57.574 [INFO][5126] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" iface="eth0" netns="/var/run/netns/cni-2626b841-fcca-c689-87d3-226004301654" Nov 1 01:57:57.621500 env[1679]: 2025-11-01 01:57:57.574 [INFO][5126] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:57:57.621500 env[1679]: 2025-11-01 01:57:57.574 [INFO][5126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:57:57.621500 env[1679]: 2025-11-01 01:57:57.589 [INFO][5165] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" HandleID="k8s-pod-network.3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Workload="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:57:57.621500 env[1679]: 2025-11-01 01:57:57.589 [INFO][5165] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:57:57.621500 env[1679]: 2025-11-01 01:57:57.596 [INFO][5165] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:57.621500 env[1679]: 2025-11-01 01:57:57.615 [WARNING][5165] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" HandleID="k8s-pod-network.3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Workload="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:57:57.621500 env[1679]: 2025-11-01 01:57:57.615 [INFO][5165] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" HandleID="k8s-pod-network.3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Workload="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:57:57.621500 env[1679]: 2025-11-01 01:57:57.617 [INFO][5165] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:57.621500 env[1679]: 2025-11-01 01:57:57.619 [INFO][5126] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:57:57.622124 env[1679]: time="2025-11-01T01:57:57.621673054Z" level=info msg="TearDown network for sandbox \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\" successfully" Nov 1 01:57:57.622124 env[1679]: time="2025-11-01T01:57:57.621720889Z" level=info msg="StopPodSandbox for \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\" returns successfully" Nov 1 01:57:57.622510 env[1679]: time="2025-11-01T01:57:57.622450334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4r6nm,Uid:66d2f097-1517-44b9-891a-35d40c5f36ae,Namespace:calico-system,Attempt:1,}" Nov 1 01:57:57.625915 systemd[1]: run-netns-cni\x2d2626b841\x2dfcca\x2dc689\x2d87d3\x2d226004301654.mount: Deactivated successfully. 
Nov 1 01:57:57.651972 kubelet[2679]: E1101 01:57:57.651945 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 01:57:57.692793 systemd-networkd[1345]: cali7ae53e99337: Link UP Nov 1 01:57:57.755099 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:57:57.755189 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7ae53e99337: link becomes ready Nov 1 01:57:57.755463 systemd-networkd[1345]: cali7ae53e99337: Gained carrier Nov 1 01:57:57.755633 systemd-networkd[1345]: cali759051e6e86: Gained IPv6LL Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.644 [INFO][5198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0 calico-apiserver-fbf49c57b- calico-apiserver 66ab6902-4483-4337-8905-71710abec0d5 950 0 2025-11-01 01:57:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fbf49c57b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-0f05b56927 calico-apiserver-fbf49c57b-msb77 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7ae53e99337 [] [] }} ContainerID="1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" Namespace="calico-apiserver" 
Pod="calico-apiserver-fbf49c57b-msb77" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-" Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.644 [INFO][5198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-msb77" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.659 [INFO][5245] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" HandleID="k8s-pod-network.1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.659 [INFO][5245] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" HandleID="k8s-pod-network.1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eb80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-0f05b56927", "pod":"calico-apiserver-fbf49c57b-msb77", "timestamp":"2025-11-01 01:57:57.659697266 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0f05b56927", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.660 [INFO][5245] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.660 [INFO][5245] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.660 [INFO][5245] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0f05b56927' Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.664 [INFO][5245] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.667 [INFO][5245] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.670 [INFO][5245] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.672 [INFO][5245] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.673 [INFO][5245] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.673 [INFO][5245] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.674 [INFO][5245] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269 Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.677 [INFO][5245] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.784849 env[1679]: 2025-11-01 
01:57:57.683 [INFO][5245] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.197/26] block=192.168.3.192/26 handle="k8s-pod-network.1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.683 [INFO][5245] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.197/26] handle="k8s-pod-network.1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.683 [INFO][5245] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:57.784849 env[1679]: 2025-11-01 01:57:57.683 [INFO][5245] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.197/26] IPv6=[] ContainerID="1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" HandleID="k8s-pod-network.1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:57:57.787693 env[1679]: 2025-11-01 01:57:57.687 [INFO][5198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-msb77" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0", GenerateName:"calico-apiserver-fbf49c57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"66ab6902-4483-4337-8905-71710abec0d5", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fbf49c57b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"", Pod:"calico-apiserver-fbf49c57b-msb77", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ae53e99337", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:57.787693 env[1679]: 2025-11-01 01:57:57.688 [INFO][5198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.197/32] ContainerID="1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-msb77" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:57:57.787693 env[1679]: 2025-11-01 01:57:57.688 [INFO][5198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ae53e99337 ContainerID="1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-msb77" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:57:57.787693 env[1679]: 2025-11-01 01:57:57.755 [INFO][5198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-msb77" 
WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:57:57.787693 env[1679]: 2025-11-01 01:57:57.755 [INFO][5198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-msb77" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0", GenerateName:"calico-apiserver-fbf49c57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"66ab6902-4483-4337-8905-71710abec0d5", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fbf49c57b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269", Pod:"calico-apiserver-fbf49c57b-msb77", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ae53e99337", MAC:"52:51:c7:aa:e6:f4", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:57.787693 env[1679]: 2025-11-01 01:57:57.780 [INFO][5198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-msb77" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:57:57.809468 env[1679]: time="2025-11-01T01:57:57.809257480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:57.809468 env[1679]: time="2025-11-01T01:57:57.809389084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:57.809468 env[1679]: time="2025-11-01T01:57:57.809451958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:57.809915 env[1679]: time="2025-11-01T01:57:57.809747027Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269 pid=5297 runtime=io.containerd.runc.v2 Nov 1 01:57:57.815000 audit[5307]: NETFILTER_CFG table=filter:118 family=2 entries=68 op=nft_register_chain pid=5307 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:57:57.815000 audit[5307]: SYSCALL arch=c000003e syscall=46 success=yes exit=34624 a0=3 a1=7fffe3bd89b0 a2=0 a3=7fffe3bd899c items=0 ppid=4390 pid=5307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:57.815000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:57:57.826931 systemd-networkd[1345]: cali94c1e45fb31: Link UP Nov 1 01:57:57.854055 systemd-networkd[1345]: cali94c1e45fb31: Gained carrier Nov 1 01:57:57.854336 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali94c1e45fb31: link becomes ready Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.649 [INFO][5214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0 csi-node-driver- calico-system 66d2f097-1517-44b9-891a-35d40c5f36ae 951 0 2025-11-01 01:57:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-n-0f05b56927 
csi-node-driver-4r6nm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali94c1e45fb31 [] [] }} ContainerID="c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" Namespace="calico-system" Pod="csi-node-driver-4r6nm" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-" Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.649 [INFO][5214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" Namespace="calico-system" Pod="csi-node-driver-4r6nm" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.663 [INFO][5255] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" HandleID="k8s-pod-network.c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" Workload="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.663 [INFO][5255] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" HandleID="k8s-pod-network.c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" Workload="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e9e70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-0f05b56927", "pod":"csi-node-driver-4r6nm", "timestamp":"2025-11-01 01:57:57.663923432 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0f05b56927", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.664 [INFO][5255] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.683 [INFO][5255] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.683 [INFO][5255] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0f05b56927' Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.765 [INFO][5255] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.787 [INFO][5255] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.798 [INFO][5255] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.802 [INFO][5255] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.808 [INFO][5255] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.808 [INFO][5255] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.811 [INFO][5255] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471 Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.817 [INFO][5255] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" 
host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.823 [INFO][5255] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.198/26] block=192.168.3.192/26 handle="k8s-pod-network.c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.823 [INFO][5255] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.198/26] handle="k8s-pod-network.c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.823 [INFO][5255] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:57.861167 env[1679]: 2025-11-01 01:57:57.823 [INFO][5255] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.198/26] IPv6=[] ContainerID="c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" HandleID="k8s-pod-network.c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" Workload="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:57:57.861878 env[1679]: 2025-11-01 01:57:57.825 [INFO][5214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" Namespace="calico-system" Pod="csi-node-driver-4r6nm" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"66d2f097-1517-44b9-891a-35d40c5f36ae", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", 
"controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"", Pod:"csi-node-driver-4r6nm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94c1e45fb31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:57.861878 env[1679]: 2025-11-01 01:57:57.825 [INFO][5214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.198/32] ContainerID="c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" Namespace="calico-system" Pod="csi-node-driver-4r6nm" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:57:57.861878 env[1679]: 2025-11-01 01:57:57.825 [INFO][5214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94c1e45fb31 ContainerID="c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" Namespace="calico-system" Pod="csi-node-driver-4r6nm" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:57:57.861878 env[1679]: 2025-11-01 01:57:57.854 [INFO][5214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" Namespace="calico-system" Pod="csi-node-driver-4r6nm" 
WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:57:57.861878 env[1679]: 2025-11-01 01:57:57.854 [INFO][5214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" Namespace="calico-system" Pod="csi-node-driver-4r6nm" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"66d2f097-1517-44b9-891a-35d40c5f36ae", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471", Pod:"csi-node-driver-4r6nm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94c1e45fb31", MAC:"8a:74:15:d1:45:ae", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:57.861878 env[1679]: 2025-11-01 01:57:57.859 [INFO][5214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471" Namespace="calico-system" Pod="csi-node-driver-4r6nm" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:57:57.866000 env[1679]: time="2025-11-01T01:57:57.865968550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:57.866000 env[1679]: time="2025-11-01T01:57:57.865989446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:57.866000 env[1679]: time="2025-11-01T01:57:57.865996681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:57.866102 env[1679]: time="2025-11-01T01:57:57.866064526Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471 pid=5339 runtime=io.containerd.runc.v2 Nov 1 01:57:57.868000 audit[5350]: NETFILTER_CFG table=filter:119 family=2 entries=48 op=nft_register_chain pid=5350 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:57:57.868000 audit[5350]: SYSCALL arch=c000003e syscall=46 success=yes exit=23124 a0=3 a1=7ffee71953c0 a2=0 a3=7ffee71953ac items=0 ppid=4390 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:57.868000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:57:57.871615 env[1679]: time="2025-11-01T01:57:57.871590419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbf49c57b-msb77,Uid:66ab6902-4483-4337-8905-71710abec0d5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269\"" Nov 1 01:57:57.872273 env[1679]: time="2025-11-01T01:57:57.872259438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:57:57.883175 env[1679]: time="2025-11-01T01:57:57.883153862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4r6nm,Uid:66d2f097-1517-44b9-891a-35d40c5f36ae,Namespace:calico-system,Attempt:1,} returns sandbox id \"c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471\"" Nov 1 01:57:58.512833 env[1679]: time="2025-11-01T01:57:58.512735272Z" level=info msg="StopPodSandbox for \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\"" Nov 1 01:57:58.513235 env[1679]: time="2025-11-01T01:57:58.512735402Z" level=info msg="StopPodSandbox for \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\"" Nov 1 01:57:58.650048 env[1679]: time="2025-11-01T01:57:58.649995852Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:57:58.650512 env[1679]: time="2025-11-01T01:57:58.650426742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:57:58.650671 kubelet[2679]: E1101 01:57:58.650630 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:57:58.650760 kubelet[2679]: E1101 01:57:58.650683 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:57:58.651077 env[1679]: time="2025-11-01T01:57:58.651046692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:57:58.651154 kubelet[2679]: E1101 01:57:58.650997 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9792,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbf49c57b-msb77_calico-apiserver(66ab6902-4483-4337-8905-71710abec0d5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:57:58.652211 kubelet[2679]: E1101 01:57:58.652166 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 01:57:58.652608 env[1679]: 2025-11-01 01:57:58.598 [INFO][5407] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:57:58.652608 env[1679]: 2025-11-01 01:57:58.599 [INFO][5407] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" iface="eth0" netns="/var/run/netns/cni-2ab03788-6df4-b672-5a22-fc7a5ab09529" Nov 1 01:57:58.652608 env[1679]: 2025-11-01 01:57:58.599 [INFO][5407] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" iface="eth0" netns="/var/run/netns/cni-2ab03788-6df4-b672-5a22-fc7a5ab09529" Nov 1 01:57:58.652608 env[1679]: 2025-11-01 01:57:58.600 [INFO][5407] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" iface="eth0" netns="/var/run/netns/cni-2ab03788-6df4-b672-5a22-fc7a5ab09529" Nov 1 01:57:58.652608 env[1679]: 2025-11-01 01:57:58.600 [INFO][5407] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:57:58.652608 env[1679]: 2025-11-01 01:57:58.600 [INFO][5407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:57:58.652608 env[1679]: 2025-11-01 01:57:58.640 [INFO][5441] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" HandleID="k8s-pod-network.0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:57:58.652608 env[1679]: 2025-11-01 01:57:58.640 [INFO][5441] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:57:58.652608 env[1679]: 2025-11-01 01:57:58.640 [INFO][5441] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:58.652608 env[1679]: 2025-11-01 01:57:58.647 [WARNING][5441] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" HandleID="k8s-pod-network.0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:57:58.652608 env[1679]: 2025-11-01 01:57:58.647 [INFO][5441] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" HandleID="k8s-pod-network.0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:57:58.652608 env[1679]: 2025-11-01 01:57:58.649 [INFO][5441] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:58.652608 env[1679]: 2025-11-01 01:57:58.650 [INFO][5407] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:57:58.652608 env[1679]: time="2025-11-01T01:57:58.652365668Z" level=info msg="TearDown network for sandbox \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\" successfully" Nov 1 01:57:58.652608 env[1679]: time="2025-11-01T01:57:58.652391326Z" level=info msg="StopPodSandbox for \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\" returns successfully" Nov 1 01:57:58.653483 env[1679]: time="2025-11-01T01:57:58.652843954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rfczw,Uid:af91c5b4-018e-48fd-aa87-a2db911b8a67,Namespace:kube-system,Attempt:1,}" Nov 1 01:57:58.655650 kubelet[2679]: E1101 01:57:58.655614 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 01:57:58.657077 systemd[1]: run-netns-cni\x2d2ab03788\x2d6df4\x2db672\x2d5a22\x2dfc7a5ab09529.mount: Deactivated successfully. Nov 1 01:57:58.666383 env[1679]: 2025-11-01 01:57:58.599 [INFO][5406] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:57:58.666383 env[1679]: 2025-11-01 01:57:58.599 [INFO][5406] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" iface="eth0" netns="/var/run/netns/cni-5e96191a-4525-951d-0ed6-f2eed7a0f123" Nov 1 01:57:58.666383 env[1679]: 2025-11-01 01:57:58.599 [INFO][5406] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" iface="eth0" netns="/var/run/netns/cni-5e96191a-4525-951d-0ed6-f2eed7a0f123" Nov 1 01:57:58.666383 env[1679]: 2025-11-01 01:57:58.600 [INFO][5406] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" iface="eth0" netns="/var/run/netns/cni-5e96191a-4525-951d-0ed6-f2eed7a0f123" Nov 1 01:57:58.666383 env[1679]: 2025-11-01 01:57:58.600 [INFO][5406] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:57:58.666383 env[1679]: 2025-11-01 01:57:58.600 [INFO][5406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:57:58.666383 env[1679]: 2025-11-01 01:57:58.640 [INFO][5442] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" HandleID="k8s-pod-network.9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:57:58.666383 env[1679]: 2025-11-01 01:57:58.640 [INFO][5442] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:57:58.666383 env[1679]: 2025-11-01 01:57:58.649 [INFO][5442] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:58.666383 env[1679]: 2025-11-01 01:57:58.657 [WARNING][5442] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" HandleID="k8s-pod-network.9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:57:58.666383 env[1679]: 2025-11-01 01:57:58.657 [INFO][5442] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" HandleID="k8s-pod-network.9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:57:58.666383 env[1679]: 2025-11-01 01:57:58.658 [INFO][5442] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:58.666383 env[1679]: 2025-11-01 01:57:58.660 [INFO][5406] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:57:58.667291 env[1679]: time="2025-11-01T01:57:58.666494216Z" level=info msg="TearDown network for sandbox \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\" successfully" Nov 1 01:57:58.667291 env[1679]: time="2025-11-01T01:57:58.666527944Z" level=info msg="StopPodSandbox for \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\" returns successfully" Nov 1 01:57:58.667291 env[1679]: time="2025-11-01T01:57:58.667218660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbf49c57b-d5g9p,Uid:381a5ea3-a9a9-42e2-8c3a-9c0b410afe13,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:57:58.670948 systemd[1]: run-netns-cni\x2d5e96191a\x2d4525\x2d951d\x2d0ed6\x2df2eed7a0f123.mount: Deactivated successfully. 
Nov 1 01:57:58.676000 audit[5502]: NETFILTER_CFG table=filter:120 family=2 entries=14 op=nft_register_rule pid=5502 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:58.676000 audit[5502]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdeebdf870 a2=0 a3=7ffdeebdf85c items=0 ppid=2820 pid=5502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:58.676000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:58.687000 audit[5502]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=5502 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:58.687000 audit[5502]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdeebdf870 a2=0 a3=7ffdeebdf85c items=0 ppid=2820 pid=5502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:58.687000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:58.735558 systemd-networkd[1345]: cali88680f3b3d7: Link UP Nov 1 01:57:58.787090 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:57:58.787153 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali88680f3b3d7: link becomes ready Nov 1 01:57:58.787299 systemd-networkd[1345]: cali88680f3b3d7: Gained carrier Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.687 [INFO][5474] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0 coredns-668d6bf9bc- kube-system 
af91c5b4-018e-48fd-aa87-a2db911b8a67 966 0 2025-11-01 01:57:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-0f05b56927 coredns-668d6bf9bc-rfczw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali88680f3b3d7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfczw" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-" Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.687 [INFO][5474] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfczw" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.702 [INFO][5523] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" HandleID="k8s-pod-network.fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.702 [INFO][5523] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" HandleID="k8s-pod-network.fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f630), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-0f05b56927", "pod":"coredns-668d6bf9bc-rfczw", "timestamp":"2025-11-01 01:57:58.702812719 
+0000 UTC"}, Hostname:"ci-3510.3.8-n-0f05b56927", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.702 [INFO][5523] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.702 [INFO][5523] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.702 [INFO][5523] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0f05b56927' Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.707 [INFO][5523] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.710 [INFO][5523] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.713 [INFO][5523] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.715 [INFO][5523] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.716 [INFO][5523] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.716 [INFO][5523] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.718 [INFO][5523] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.720 [INFO][5523] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.733 [INFO][5523] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.199/26] block=192.168.3.192/26 handle="k8s-pod-network.fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.733 [INFO][5523] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.199/26] handle="k8s-pod-network.fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.733 [INFO][5523] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:57:58.793760 env[1679]: 2025-11-01 01:57:58.733 [INFO][5523] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.199/26] IPv6=[] ContainerID="fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" HandleID="k8s-pod-network.fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:57:58.794185 env[1679]: 2025-11-01 01:57:58.734 [INFO][5474] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfczw" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"af91c5b4-018e-48fd-aa87-a2db911b8a67", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"", Pod:"coredns-668d6bf9bc-rfczw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88680f3b3d7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:58.794185 env[1679]: 2025-11-01 01:57:58.734 [INFO][5474] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.199/32] ContainerID="fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfczw" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:57:58.794185 env[1679]: 2025-11-01 01:57:58.734 [INFO][5474] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88680f3b3d7 ContainerID="fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfczw" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:57:58.794185 env[1679]: 2025-11-01 01:57:58.787 [INFO][5474] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfczw" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:57:58.794185 env[1679]: 2025-11-01 01:57:58.787 [INFO][5474] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfczw" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"af91c5b4-018e-48fd-aa87-a2db911b8a67", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf", Pod:"coredns-668d6bf9bc-rfczw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88680f3b3d7", MAC:"0e:35:04:8b:88:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:58.794185 env[1679]: 2025-11-01 01:57:58.792 [INFO][5474] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf" Namespace="kube-system" Pod="coredns-668d6bf9bc-rfczw" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:57:58.798351 env[1679]: time="2025-11-01T01:57:58.798296138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:58.798351 env[1679]: time="2025-11-01T01:57:58.798321619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:58.798351 env[1679]: time="2025-11-01T01:57:58.798344379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:58.798483 env[1679]: time="2025-11-01T01:57:58.798419925Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf pid=5573 runtime=io.containerd.runc.v2 Nov 1 01:57:58.800000 audit[5584]: NETFILTER_CFG table=filter:122 family=2 entries=48 op=nft_register_chain pid=5584 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:57:58.800000 audit[5584]: SYSCALL arch=c000003e syscall=46 success=yes exit=22704 a0=3 a1=7ffc30fd1d30 a2=0 a3=7ffc30fd1d1c items=0 ppid=4390 pid=5584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:58.800000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:57:58.826592 env[1679]: time="2025-11-01T01:57:58.826563515Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-rfczw,Uid:af91c5b4-018e-48fd-aa87-a2db911b8a67,Namespace:kube-system,Attempt:1,} returns sandbox id \"fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf\"" Nov 1 01:57:58.827185 systemd-networkd[1345]: calid3b5138066c: Link UP Nov 1 01:57:58.827993 env[1679]: time="2025-11-01T01:57:58.827976419Z" level=info msg="CreateContainer within sandbox \"fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:57:58.853246 systemd-networkd[1345]: calid3b5138066c: Gained carrier Nov 1 01:57:58.853349 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid3b5138066c: link becomes ready Nov 1 01:57:58.855550 env[1679]: time="2025-11-01T01:57:58.855525292Z" level=info msg="CreateContainer within sandbox \"fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c7ce2df9540f33187292f8e8e714998889fadf9ce24575667f2c54e1546e8f42\"" Nov 1 01:57:58.855847 env[1679]: time="2025-11-01T01:57:58.855834003Z" level=info msg="StartContainer for \"c7ce2df9540f33187292f8e8e714998889fadf9ce24575667f2c54e1546e8f42\"" Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.693 [INFO][5491] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0 calico-apiserver-fbf49c57b- calico-apiserver 381a5ea3-a9a9-42e2-8c3a-9c0b410afe13 967 0 2025-11-01 01:57:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fbf49c57b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-0f05b56927 calico-apiserver-fbf49c57b-d5g9p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
calid3b5138066c [] [] }} ContainerID="53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-d5g9p" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-" Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.693 [INFO][5491] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-d5g9p" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.708 [INFO][5529] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" HandleID="k8s-pod-network.53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.708 [INFO][5529] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" HandleID="k8s-pod-network.53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002da320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-0f05b56927", "pod":"calico-apiserver-fbf49c57b-d5g9p", "timestamp":"2025-11-01 01:57:58.708146989 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0f05b56927", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.708 [INFO][5529] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.733 [INFO][5529] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.733 [INFO][5529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0f05b56927' Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.808 [INFO][5529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.811 [INFO][5529] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.814 [INFO][5529] ipam/ipam.go 511: Trying affinity for 192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.815 [INFO][5529] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.817 [INFO][5529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.192/26 host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.817 [INFO][5529] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.192/26 handle="k8s-pod-network.53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.818 [INFO][5529] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.821 [INFO][5529] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.192/26 handle="k8s-pod-network.53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.859974 
env[1679]: 2025-11-01 01:57:58.825 [INFO][5529] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.200/26] block=192.168.3.192/26 handle="k8s-pod-network.53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.825 [INFO][5529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.200/26] handle="k8s-pod-network.53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" host="ci-3510.3.8-n-0f05b56927" Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.825 [INFO][5529] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:57:58.859974 env[1679]: 2025-11-01 01:57:58.825 [INFO][5529] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.200/26] IPv6=[] ContainerID="53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" HandleID="k8s-pod-network.53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:57:58.860422 env[1679]: 2025-11-01 01:57:58.826 [INFO][5491] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-d5g9p" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0", GenerateName:"calico-apiserver-fbf49c57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"381a5ea3-a9a9-42e2-8c3a-9c0b410afe13", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fbf49c57b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"", Pod:"calico-apiserver-fbf49c57b-d5g9p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid3b5138066c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:58.860422 env[1679]: 2025-11-01 01:57:58.826 [INFO][5491] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.200/32] ContainerID="53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-d5g9p" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:57:58.860422 env[1679]: 2025-11-01 01:57:58.826 [INFO][5491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3b5138066c ContainerID="53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-d5g9p" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:57:58.860422 env[1679]: 2025-11-01 01:57:58.853 [INFO][5491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-d5g9p" 
WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:57:58.860422 env[1679]: 2025-11-01 01:57:58.853 [INFO][5491] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-d5g9p" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0", GenerateName:"calico-apiserver-fbf49c57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"381a5ea3-a9a9-42e2-8c3a-9c0b410afe13", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fbf49c57b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d", Pod:"calico-apiserver-fbf49c57b-d5g9p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid3b5138066c", MAC:"22:c7:77:40:dd:15", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:57:58.860422 env[1679]: 2025-11-01 01:57:58.858 [INFO][5491] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d" Namespace="calico-apiserver" Pod="calico-apiserver-fbf49c57b-d5g9p" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:57:58.864895 env[1679]: time="2025-11-01T01:57:58.864859662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:57:58.864895 env[1679]: time="2025-11-01T01:57:58.864882559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:57:58.864895 env[1679]: time="2025-11-01T01:57:58.864890485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:57:58.865054 env[1679]: time="2025-11-01T01:57:58.864962513Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d pid=5637 runtime=io.containerd.runc.v2 Nov 1 01:57:58.867000 audit[5649]: NETFILTER_CFG table=filter:123 family=2 entries=63 op=nft_register_chain pid=5649 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:57:58.867000 audit[5649]: SYSCALL arch=c000003e syscall=46 success=yes exit=30664 a0=3 a1=7ffd2e2a6560 a2=0 a3=7ffd2e2a654c items=0 ppid=4390 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:58.867000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:57:58.877587 env[1679]: time="2025-11-01T01:57:58.877544091Z" level=info msg="StartContainer for \"c7ce2df9540f33187292f8e8e714998889fadf9ce24575667f2c54e1546e8f42\" returns successfully" Nov 1 01:57:58.894768 env[1679]: time="2025-11-01T01:57:58.894741505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fbf49c57b-d5g9p,Uid:381a5ea3-a9a9-42e2-8c3a-9c0b410afe13,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d\"" Nov 1 01:57:58.992188 env[1679]: time="2025-11-01T01:57:58.992091899Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:57:58.993084 env[1679]: time="2025-11-01T01:57:58.992933139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:57:58.993471 kubelet[2679]: E1101 01:57:58.993317 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:57:58.993744 kubelet[2679]: E1101 01:57:58.993464 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:57:58.994122 kubelet[2679]: E1101 01:57:58.993978 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:57:58.994603 env[1679]: time="2025-11-01T01:57:58.994189457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:57:59.054563 systemd-networkd[1345]: cali94c1e45fb31: Gained IPv6LL Nov 1 01:57:59.351301 env[1679]: time="2025-11-01T01:57:59.351044821Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:57:59.366946 env[1679]: time="2025-11-01T01:57:59.366783882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:57:59.367225 kubelet[2679]: E1101 01:57:59.367146 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:57:59.367415 kubelet[2679]: E1101 01:57:59.367244 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:57:59.367844 kubelet[2679]: E1101 01:57:59.367641 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shhx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbf49c57b-d5g9p_calico-apiserver(381a5ea3-a9a9-42e2-8c3a-9c0b410afe13): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:57:59.368375 env[1679]: time="2025-11-01T01:57:59.367976557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:57:59.369218 kubelet[2679]: E1101 01:57:59.369147 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:57:59.664274 kubelet[2679]: E1101 01:57:59.664067 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:57:59.667930 kubelet[2679]: E1101 01:57:59.667838 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 01:57:59.693567 systemd-networkd[1345]: cali7ae53e99337: Gained IPv6LL Nov 1 01:57:59.707372 kubelet[2679]: I1101 01:57:59.707275 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rfczw" podStartSLOduration=40.707242188 podStartE2EDuration="40.707242188s" podCreationTimestamp="2025-11-01 01:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:57:59.706530705 +0000 UTC m=+45.334167047" watchObservedRunningTime="2025-11-01 01:57:59.707242188 +0000 UTC m=+45.334878522" Nov 1 01:57:59.708000 audit[5703]: NETFILTER_CFG table=filter:124 family=2 entries=14 op=nft_register_rule pid=5703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:59.708000 audit[5703]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe6bcf3f70 a2=0 a3=7ffe6bcf3f5c items=0 ppid=2820 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:59.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:59.725000 audit[5703]: NETFILTER_CFG table=nat:125 family=2 entries=20 op=nft_register_rule pid=5703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:59.725000 audit[5703]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe6bcf3f70 a2=0 a3=7ffe6bcf3f5c items=0 ppid=2820 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:59.725000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:59.728824 env[1679]: time="2025-11-01T01:57:59.728778861Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:57:59.729423 env[1679]: time="2025-11-01T01:57:59.729345187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:57:59.729620 kubelet[2679]: E1101 01:57:59.729552 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:57:59.729620 kubelet[2679]: E1101 01:57:59.729605 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:57:59.729817 kubelet[2679]: E1101 01:57:59.729725 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:57:59.730991 kubelet[2679]: E1101 01:57:59.730933 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:57:59.750000 audit[5705]: NETFILTER_CFG table=filter:126 family=2 entries=14 op=nft_register_rule pid=5705 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:59.750000 audit[5705]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc1a8a3ff0 a2=0 a3=7ffc1a8a3fdc items=0 ppid=2820 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:59.750000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:57:59.759000 audit[5705]: NETFILTER_CFG table=nat:127 family=2 entries=44 op=nft_register_rule pid=5705 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:57:59.759000 audit[5705]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc1a8a3ff0 
a2=0 a3=7ffc1a8a3fdc items=0 ppid=2820 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:57:59.759000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:58:00.142569 systemd-networkd[1345]: calid3b5138066c: Gained IPv6LL Nov 1 01:58:00.461679 systemd-networkd[1345]: cali88680f3b3d7: Gained IPv6LL Nov 1 01:58:00.671456 kubelet[2679]: E1101 01:58:00.671291 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:58:00.673247 kubelet[2679]: E1101 01:58:00.673111 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:58:00.766000 audit[5707]: NETFILTER_CFG table=filter:128 family=2 entries=14 op=nft_register_rule pid=5707 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:58:00.795475 kernel: kauditd_printk_skb: 47 callbacks suppressed Nov 1 01:58:00.795554 kernel: audit: type=1325 audit(1761962280.766:409): table=filter:128 family=2 entries=14 op=nft_register_rule pid=5707 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:58:00.766000 audit[5707]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd0b8a6d20 a2=0 a3=7ffd0b8a6d0c items=0 ppid=2820 pid=5707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:58:00.938407 kernel: audit: type=1300 audit(1761962280.766:409): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd0b8a6d20 a2=0 a3=7ffd0b8a6d0c items=0 ppid=2820 pid=5707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:58:00.938457 kernel: audit: type=1327 audit(1761962280.766:409): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:58:00.766000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:58:00.996000 audit[5707]: NETFILTER_CFG table=nat:129 family=2 entries=56 op=nft_register_chain pid=5707 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Nov 1 01:58:00.996000 audit[5707]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd0b8a6d20 a2=0 a3=7ffd0b8a6d0c items=0 ppid=2820 pid=5707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:58:01.140306 kernel: audit: type=1325 audit(1761962280.996:410): table=nat:129 family=2 entries=56 op=nft_register_chain pid=5707 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:58:01.140350 kernel: audit: type=1300 audit(1761962280.996:410): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd0b8a6d20 a2=0 a3=7ffd0b8a6d0c items=0 ppid=2820 pid=5707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:58:01.140366 kernel: audit: type=1327 audit(1761962280.996:410): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:58:00.996000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:58:03.514076 env[1679]: time="2025-11-01T01:58:03.513848126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:58:03.859917 env[1679]: time="2025-11-01T01:58:03.859659332Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:03.860762 env[1679]: time="2025-11-01T01:58:03.860602615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
Nov 1 01:58:03.861137 kubelet[2679]: E1101 01:58:03.861056 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:58:03.861993 kubelet[2679]: E1101 01:58:03.861158 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:58:03.861993 kubelet[2679]: E1101 01:58:03.861494 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b3852adad46a4293a11e539c5e005d65,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgwpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,Allow
PrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b79df9786-ds9vj_calico-system(aabe0a9d-10db-49d2-a1d8-2a8011591b5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:03.864475 env[1679]: time="2025-11-01T01:58:03.864403442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:58:04.217309 env[1679]: time="2025-11-01T01:58:04.217030147Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:04.218212 env[1679]: time="2025-11-01T01:58:04.218043173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:58:04.218734 kubelet[2679]: E1101 01:58:04.218608 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:58:04.218981 kubelet[2679]: E1101 01:58:04.218731 2679 kuberuntime_image.go:55] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:58:04.219184 kubelet[2679]: E1101 01:58:04.219024 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgwpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:Runt
imeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b79df9786-ds9vj_calico-system(aabe0a9d-10db-49d2-a1d8-2a8011591b5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:04.220600 kubelet[2679]: E1101 01:58:04.220470 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 01:58:09.513784 env[1679]: time="2025-11-01T01:58:09.513693074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:58:09.851427 env[1679]: time="2025-11-01T01:58:09.851156089Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:09.852105 env[1679]: time="2025-11-01T01:58:09.851977886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:58:09.852560 kubelet[2679]: E1101 01:58:09.852425 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:58:09.852560 kubelet[2679]: E1101 01:58:09.852541 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:58:09.853812 kubelet[2679]: E1101 01:58:09.852847 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rk5mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5ddd8b55c8-kbtkg_calico-system(792abee8-a81f-4cb1-9ede-47798a35f0b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:09.854383 kubelet[2679]: E1101 01:58:09.854203 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 01:58:11.513753 env[1679]: time="2025-11-01T01:58:11.513664574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:58:11.859126 env[1679]: 
time="2025-11-01T01:58:11.858846374Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:11.863590 env[1679]: time="2025-11-01T01:58:11.863399635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:58:11.863959 kubelet[2679]: E1101 01:58:11.863875 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:58:11.865066 kubelet[2679]: E1101 01:58:11.863972 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:58:11.865066 kubelet[2679]: E1101 01:58:11.864531 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9792,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbf49c57b-msb77_calico-apiserver(66ab6902-4483-4337-8905-71710abec0d5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:11.865663 env[1679]: time="2025-11-01T01:58:11.864679297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:58:11.865998 kubelet[2679]: E1101 01:58:11.865923 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 01:58:12.202562 env[1679]: time="2025-11-01T01:58:12.202298960Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:12.203390 env[1679]: time="2025-11-01T01:58:12.203254690Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:58:12.203834 kubelet[2679]: E1101 01:58:12.203709 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:58:12.203834 kubelet[2679]: E1101 01:58:12.203810 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:58:12.204236 kubelet[2679]: E1101 01:58:12.204102 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45r66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bm44l_calico-system(6bd4bd36-d549-4194-a331-51709a095bb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:12.205542 kubelet[2679]: E1101 01:58:12.205451 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 
01:58:12.514604 env[1679]: time="2025-11-01T01:58:12.514473431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:58:12.886030 env[1679]: time="2025-11-01T01:58:12.885761918Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:12.902978 env[1679]: time="2025-11-01T01:58:12.902817109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:58:12.903289 kubelet[2679]: E1101 01:58:12.903213 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:58:12.904092 kubelet[2679]: E1101 01:58:12.903312 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:58:12.904092 kubelet[2679]: E1101 01:58:12.903604 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shhx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbf49c57b-d5g9p_calico-apiserver(381a5ea3-a9a9-42e2-8c3a-9c0b410afe13): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:12.905032 kubelet[2679]: E1101 01:58:12.904941 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:58:13.514254 env[1679]: time="2025-11-01T01:58:13.514109054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:58:13.874862 env[1679]: time="2025-11-01T01:58:13.874597370Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:13.876123 env[1679]: time="2025-11-01T01:58:13.875699038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:58:13.876290 kubelet[2679]: E1101 01:58:13.876185 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:58:13.876523 kubelet[2679]: E1101 01:58:13.876300 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:58:13.876788 kubelet[2679]: E1101 01:58:13.876638 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:13.879762 env[1679]: time="2025-11-01T01:58:13.879679011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:58:14.220064 env[1679]: time="2025-11-01T01:58:14.219801564Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:14.220814 env[1679]: time="2025-11-01T01:58:14.220684807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:58:14.221206 kubelet[2679]: E1101 01:58:14.221098 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:58:14.222060 kubelet[2679]: E1101 01:58:14.221204 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:58:14.222060 kubelet[2679]: E1101 01:58:14.221501 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:14.222960 kubelet[2679]: E1101 01:58:14.222828 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:58:14.444705 env[1679]: time="2025-11-01T01:58:14.444630019Z" level=info msg="StopPodSandbox for \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\"" Nov 1 01:58:14.519486 env[1679]: 2025-11-01 01:58:14.494 [WARNING][5736] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"af91c5b4-018e-48fd-aa87-a2db911b8a67", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf", Pod:"coredns-668d6bf9bc-rfczw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88680f3b3d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:14.519486 env[1679]: 2025-11-01 
01:58:14.494 [INFO][5736] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:58:14.519486 env[1679]: 2025-11-01 01:58:14.494 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" iface="eth0" netns="" Nov 1 01:58:14.519486 env[1679]: 2025-11-01 01:58:14.494 [INFO][5736] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:58:14.519486 env[1679]: 2025-11-01 01:58:14.494 [INFO][5736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:58:14.519486 env[1679]: 2025-11-01 01:58:14.511 [INFO][5754] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" HandleID="k8s-pod-network.0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:58:14.519486 env[1679]: 2025-11-01 01:58:14.511 [INFO][5754] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:14.519486 env[1679]: 2025-11-01 01:58:14.511 [INFO][5754] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:14.519486 env[1679]: 2025-11-01 01:58:14.516 [WARNING][5754] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" HandleID="k8s-pod-network.0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:58:14.519486 env[1679]: 2025-11-01 01:58:14.516 [INFO][5754] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" HandleID="k8s-pod-network.0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:58:14.519486 env[1679]: 2025-11-01 01:58:14.517 [INFO][5754] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:14.519486 env[1679]: 2025-11-01 01:58:14.518 [INFO][5736] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:58:14.519486 env[1679]: time="2025-11-01T01:58:14.519477719Z" level=info msg="TearDown network for sandbox \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\" successfully" Nov 1 01:58:14.520066 env[1679]: time="2025-11-01T01:58:14.519497954Z" level=info msg="StopPodSandbox for \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\" returns successfully" Nov 1 01:58:14.520066 env[1679]: time="2025-11-01T01:58:14.519802635Z" level=info msg="RemovePodSandbox for \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\"" Nov 1 01:58:14.520066 env[1679]: time="2025-11-01T01:58:14.519830892Z" level=info msg="Forcibly stopping sandbox \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\"" Nov 1 01:58:14.554458 env[1679]: 2025-11-01 01:58:14.537 [WARNING][5781] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"af91c5b4-018e-48fd-aa87-a2db911b8a67", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"fef1fe966e9a659a6ef270e96b3bc5d62cecf5ff468b9c7cd4149a9a3d5658cf", Pod:"coredns-668d6bf9bc-rfczw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88680f3b3d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:14.554458 env[1679]: 2025-11-01 
01:58:14.537 [INFO][5781] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:58:14.554458 env[1679]: 2025-11-01 01:58:14.537 [INFO][5781] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" iface="eth0" netns="" Nov 1 01:58:14.554458 env[1679]: 2025-11-01 01:58:14.537 [INFO][5781] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:58:14.554458 env[1679]: 2025-11-01 01:58:14.537 [INFO][5781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:58:14.554458 env[1679]: 2025-11-01 01:58:14.547 [INFO][5798] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" HandleID="k8s-pod-network.0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:58:14.554458 env[1679]: 2025-11-01 01:58:14.547 [INFO][5798] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:14.554458 env[1679]: 2025-11-01 01:58:14.547 [INFO][5798] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:14.554458 env[1679]: 2025-11-01 01:58:14.551 [WARNING][5798] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" HandleID="k8s-pod-network.0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:58:14.554458 env[1679]: 2025-11-01 01:58:14.551 [INFO][5798] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" HandleID="k8s-pod-network.0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--rfczw-eth0" Nov 1 01:58:14.554458 env[1679]: 2025-11-01 01:58:14.552 [INFO][5798] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:14.554458 env[1679]: 2025-11-01 01:58:14.553 [INFO][5781] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71" Nov 1 01:58:14.554811 env[1679]: time="2025-11-01T01:58:14.554469424Z" level=info msg="TearDown network for sandbox \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\" successfully" Nov 1 01:58:14.567001 env[1679]: time="2025-11-01T01:58:14.566953412Z" level=info msg="RemovePodSandbox \"0098145c429f01d528e0ab458acf53022e278faf1eb947ba9ebee896ef3fff71\" returns successfully" Nov 1 01:58:14.567243 env[1679]: time="2025-11-01T01:58:14.567223438Z" level=info msg="StopPodSandbox for \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\"" Nov 1 01:58:14.616179 env[1679]: 2025-11-01 01:58:14.591 [WARNING][5823] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"183529c2-fd5c-4a2e-b002-133e45559e04", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207", Pod:"coredns-668d6bf9bc-spzcr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali759051e6e86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:14.616179 env[1679]: 2025-11-01 
01:58:14.591 [INFO][5823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:58:14.616179 env[1679]: 2025-11-01 01:58:14.591 [INFO][5823] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" iface="eth0" netns="" Nov 1 01:58:14.616179 env[1679]: 2025-11-01 01:58:14.591 [INFO][5823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:58:14.616179 env[1679]: 2025-11-01 01:58:14.591 [INFO][5823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:58:14.616179 env[1679]: 2025-11-01 01:58:14.606 [INFO][5840] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" HandleID="k8s-pod-network.f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:58:14.616179 env[1679]: 2025-11-01 01:58:14.607 [INFO][5840] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:14.616179 env[1679]: 2025-11-01 01:58:14.607 [INFO][5840] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:14.616179 env[1679]: 2025-11-01 01:58:14.612 [WARNING][5840] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" HandleID="k8s-pod-network.f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:58:14.616179 env[1679]: 2025-11-01 01:58:14.612 [INFO][5840] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" HandleID="k8s-pod-network.f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:58:14.616179 env[1679]: 2025-11-01 01:58:14.614 [INFO][5840] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:14.616179 env[1679]: 2025-11-01 01:58:14.615 [INFO][5823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:58:14.616922 env[1679]: time="2025-11-01T01:58:14.616199757Z" level=info msg="TearDown network for sandbox \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\" successfully" Nov 1 01:58:14.616922 env[1679]: time="2025-11-01T01:58:14.616225230Z" level=info msg="StopPodSandbox for \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\" returns successfully" Nov 1 01:58:14.616922 env[1679]: time="2025-11-01T01:58:14.616584594Z" level=info msg="RemovePodSandbox for \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\"" Nov 1 01:58:14.616922 env[1679]: time="2025-11-01T01:58:14.616619900Z" level=info msg="Forcibly stopping sandbox \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\"" Nov 1 01:58:14.667269 env[1679]: 2025-11-01 01:58:14.642 [WARNING][5865] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"183529c2-fd5c-4a2e-b002-133e45559e04", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"17acacc99b88c70c5549b73d5c9f497f61cf120f12787cf7cd20725c16309207", Pod:"coredns-668d6bf9bc-spzcr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali759051e6e86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:14.667269 env[1679]: 2025-11-01 
01:58:14.643 [INFO][5865] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:58:14.667269 env[1679]: 2025-11-01 01:58:14.643 [INFO][5865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" iface="eth0" netns="" Nov 1 01:58:14.667269 env[1679]: 2025-11-01 01:58:14.643 [INFO][5865] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:58:14.667269 env[1679]: 2025-11-01 01:58:14.643 [INFO][5865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:58:14.667269 env[1679]: 2025-11-01 01:58:14.657 [INFO][5885] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" HandleID="k8s-pod-network.f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:58:14.667269 env[1679]: 2025-11-01 01:58:14.658 [INFO][5885] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:14.667269 env[1679]: 2025-11-01 01:58:14.658 [INFO][5885] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:14.667269 env[1679]: 2025-11-01 01:58:14.663 [WARNING][5885] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" HandleID="k8s-pod-network.f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:58:14.667269 env[1679]: 2025-11-01 01:58:14.663 [INFO][5885] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" HandleID="k8s-pod-network.f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Workload="ci--3510.3.8--n--0f05b56927-k8s-coredns--668d6bf9bc--spzcr-eth0" Nov 1 01:58:14.667269 env[1679]: 2025-11-01 01:58:14.665 [INFO][5885] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:14.667269 env[1679]: 2025-11-01 01:58:14.665 [INFO][5865] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e" Nov 1 01:58:14.668042 env[1679]: time="2025-11-01T01:58:14.667297950Z" level=info msg="TearDown network for sandbox \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\" successfully" Nov 1 01:58:14.669503 env[1679]: time="2025-11-01T01:58:14.669454445Z" level=info msg="RemovePodSandbox \"f1a943a86732aaeda7dc14fbf8c50eac88c3428d197ee386a60d32b06929276e\" returns successfully" Nov 1 01:58:14.669821 env[1679]: time="2025-11-01T01:58:14.669771079Z" level=info msg="StopPodSandbox for \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\"" Nov 1 01:58:14.767261 env[1679]: 2025-11-01 01:58:14.702 [WARNING][5908] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0", GenerateName:"calico-apiserver-fbf49c57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"66ab6902-4483-4337-8905-71710abec0d5", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fbf49c57b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269", Pod:"calico-apiserver-fbf49c57b-msb77", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ae53e99337", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:14.767261 env[1679]: 2025-11-01 01:58:14.702 [INFO][5908] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:58:14.767261 env[1679]: 2025-11-01 01:58:14.702 [INFO][5908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" iface="eth0" netns="" Nov 1 01:58:14.767261 env[1679]: 2025-11-01 01:58:14.702 [INFO][5908] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:58:14.767261 env[1679]: 2025-11-01 01:58:14.702 [INFO][5908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:58:14.767261 env[1679]: 2025-11-01 01:58:14.752 [INFO][5925] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" HandleID="k8s-pod-network.a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:58:14.767261 env[1679]: 2025-11-01 01:58:14.753 [INFO][5925] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:14.767261 env[1679]: 2025-11-01 01:58:14.753 [INFO][5925] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:14.767261 env[1679]: 2025-11-01 01:58:14.761 [WARNING][5925] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" HandleID="k8s-pod-network.a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:58:14.767261 env[1679]: 2025-11-01 01:58:14.761 [INFO][5925] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" HandleID="k8s-pod-network.a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:58:14.767261 env[1679]: 2025-11-01 01:58:14.763 [INFO][5925] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:14.767261 env[1679]: 2025-11-01 01:58:14.765 [INFO][5908] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:58:14.768131 env[1679]: time="2025-11-01T01:58:14.767291522Z" level=info msg="TearDown network for sandbox \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\" successfully" Nov 1 01:58:14.768131 env[1679]: time="2025-11-01T01:58:14.767343812Z" level=info msg="StopPodSandbox for \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\" returns successfully" Nov 1 01:58:14.768131 env[1679]: time="2025-11-01T01:58:14.767800908Z" level=info msg="RemovePodSandbox for \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\"" Nov 1 01:58:14.768131 env[1679]: time="2025-11-01T01:58:14.767845549Z" level=info msg="Forcibly stopping sandbox \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\"" Nov 1 01:58:14.849268 env[1679]: 2025-11-01 01:58:14.811 [WARNING][5952] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0", GenerateName:"calico-apiserver-fbf49c57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"66ab6902-4483-4337-8905-71710abec0d5", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fbf49c57b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"1d8c65593819bf9e26f9d5d6b411232c0216b1614a12c68cfc5505d454e71269", Pod:"calico-apiserver-fbf49c57b-msb77", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ae53e99337", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:14.849268 env[1679]: 2025-11-01 01:58:14.811 [INFO][5952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:58:14.849268 env[1679]: 2025-11-01 01:58:14.811 [INFO][5952] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" iface="eth0" netns="" Nov 1 01:58:14.849268 env[1679]: 2025-11-01 01:58:14.811 [INFO][5952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:58:14.849268 env[1679]: 2025-11-01 01:58:14.811 [INFO][5952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:58:14.849268 env[1679]: 2025-11-01 01:58:14.836 [INFO][5969] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" HandleID="k8s-pod-network.a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:58:14.849268 env[1679]: 2025-11-01 01:58:14.836 [INFO][5969] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:14.849268 env[1679]: 2025-11-01 01:58:14.836 [INFO][5969] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:14.849268 env[1679]: 2025-11-01 01:58:14.844 [WARNING][5969] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" HandleID="k8s-pod-network.a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:58:14.849268 env[1679]: 2025-11-01 01:58:14.844 [INFO][5969] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" HandleID="k8s-pod-network.a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--msb77-eth0" Nov 1 01:58:14.849268 env[1679]: 2025-11-01 01:58:14.846 [INFO][5969] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:14.849268 env[1679]: 2025-11-01 01:58:14.847 [INFO][5952] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a" Nov 1 01:58:14.849268 env[1679]: time="2025-11-01T01:58:14.849235083Z" level=info msg="TearDown network for sandbox \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\" successfully" Nov 1 01:58:14.851682 env[1679]: time="2025-11-01T01:58:14.851654502Z" level=info msg="RemovePodSandbox \"a9241cb520cf8317b2fc9158282dde44faed2d341f86f8fcf6eafc008216461a\" returns successfully" Nov 1 01:58:14.852125 env[1679]: time="2025-11-01T01:58:14.852092864Z" level=info msg="StopPodSandbox for \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\"" Nov 1 01:58:14.921200 env[1679]: 2025-11-01 01:58:14.887 [WARNING][5995] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6bd4bd36-d549-4194-a331-51709a095bb2", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa", Pod:"goldmane-666569f655-bm44l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9f3fd44435", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:14.921200 env[1679]: 2025-11-01 01:58:14.887 [INFO][5995] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:58:14.921200 env[1679]: 2025-11-01 01:58:14.887 [INFO][5995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" iface="eth0" netns="" Nov 1 01:58:14.921200 env[1679]: 2025-11-01 01:58:14.887 [INFO][5995] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:58:14.921200 env[1679]: 2025-11-01 01:58:14.887 [INFO][5995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:58:14.921200 env[1679]: 2025-11-01 01:58:14.908 [INFO][6014] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" HandleID="k8s-pod-network.0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Workload="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:58:14.921200 env[1679]: 2025-11-01 01:58:14.908 [INFO][6014] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:14.921200 env[1679]: 2025-11-01 01:58:14.908 [INFO][6014] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:14.921200 env[1679]: 2025-11-01 01:58:14.916 [WARNING][6014] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" HandleID="k8s-pod-network.0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Workload="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:58:14.921200 env[1679]: 2025-11-01 01:58:14.916 [INFO][6014] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" HandleID="k8s-pod-network.0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Workload="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:58:14.921200 env[1679]: 2025-11-01 01:58:14.918 [INFO][6014] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:14.921200 env[1679]: 2025-11-01 01:58:14.919 [INFO][5995] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:58:14.922175 env[1679]: time="2025-11-01T01:58:14.921221059Z" level=info msg="TearDown network for sandbox \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\" successfully" Nov 1 01:58:14.922175 env[1679]: time="2025-11-01T01:58:14.921261384Z" level=info msg="StopPodSandbox for \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\" returns successfully" Nov 1 01:58:14.922175 env[1679]: time="2025-11-01T01:58:14.921695042Z" level=info msg="RemovePodSandbox for \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\"" Nov 1 01:58:14.922175 env[1679]: time="2025-11-01T01:58:14.921727660Z" level=info msg="Forcibly stopping sandbox \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\"" Nov 1 01:58:14.991315 env[1679]: 2025-11-01 01:58:14.956 [WARNING][6042] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6bd4bd36-d549-4194-a331-51709a095bb2", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"406f245c8278006346a8e2bcd70c7b1b60f881071123bac8667bc60d5c2976fa", Pod:"goldmane-666569f655-bm44l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9f3fd44435", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:14.991315 env[1679]: 2025-11-01 01:58:14.957 [INFO][6042] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:58:14.991315 env[1679]: 2025-11-01 01:58:14.957 [INFO][6042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" iface="eth0" netns="" Nov 1 01:58:14.991315 env[1679]: 2025-11-01 01:58:14.957 [INFO][6042] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:58:14.991315 env[1679]: 2025-11-01 01:58:14.957 [INFO][6042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:58:14.991315 env[1679]: 2025-11-01 01:58:14.978 [INFO][6061] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" HandleID="k8s-pod-network.0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Workload="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:58:14.991315 env[1679]: 2025-11-01 01:58:14.978 [INFO][6061] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:14.991315 env[1679]: 2025-11-01 01:58:14.978 [INFO][6061] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:14.991315 env[1679]: 2025-11-01 01:58:14.986 [WARNING][6061] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" HandleID="k8s-pod-network.0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Workload="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:58:14.991315 env[1679]: 2025-11-01 01:58:14.986 [INFO][6061] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" HandleID="k8s-pod-network.0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Workload="ci--3510.3.8--n--0f05b56927-k8s-goldmane--666569f655--bm44l-eth0" Nov 1 01:58:14.991315 env[1679]: 2025-11-01 01:58:14.988 [INFO][6061] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:14.991315 env[1679]: 2025-11-01 01:58:14.989 [INFO][6042] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0" Nov 1 01:58:14.991998 env[1679]: time="2025-11-01T01:58:14.991351994Z" level=info msg="TearDown network for sandbox \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\" successfully" Nov 1 01:58:14.993793 env[1679]: time="2025-11-01T01:58:14.993735552Z" level=info msg="RemovePodSandbox \"0db0ae0c2fd7ae824d5f0759a9f783a63c08fcaa95351001bb1400b42d755de0\" returns successfully" Nov 1 01:58:14.994187 env[1679]: time="2025-11-01T01:58:14.994132017Z" level=info msg="StopPodSandbox for \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\"" Nov 1 01:58:15.065050 env[1679]: 2025-11-01 01:58:15.032 [WARNING][6091] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0", GenerateName:"calico-apiserver-fbf49c57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"381a5ea3-a9a9-42e2-8c3a-9c0b410afe13", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fbf49c57b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d", Pod:"calico-apiserver-fbf49c57b-d5g9p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid3b5138066c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:15.065050 env[1679]: 2025-11-01 01:58:15.032 [INFO][6091] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:58:15.065050 env[1679]: 2025-11-01 01:58:15.032 [INFO][6091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" iface="eth0" netns="" Nov 1 01:58:15.065050 env[1679]: 2025-11-01 01:58:15.032 [INFO][6091] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:58:15.065050 env[1679]: 2025-11-01 01:58:15.032 [INFO][6091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:58:15.065050 env[1679]: 2025-11-01 01:58:15.052 [INFO][6110] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" HandleID="k8s-pod-network.9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:58:15.065050 env[1679]: 2025-11-01 01:58:15.052 [INFO][6110] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:15.065050 env[1679]: 2025-11-01 01:58:15.052 [INFO][6110] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:15.065050 env[1679]: 2025-11-01 01:58:15.060 [WARNING][6110] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" HandleID="k8s-pod-network.9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:58:15.065050 env[1679]: 2025-11-01 01:58:15.060 [INFO][6110] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" HandleID="k8s-pod-network.9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:58:15.065050 env[1679]: 2025-11-01 01:58:15.062 [INFO][6110] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:15.065050 env[1679]: 2025-11-01 01:58:15.063 [INFO][6091] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:58:15.065731 env[1679]: time="2025-11-01T01:58:15.065078434Z" level=info msg="TearDown network for sandbox \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\" successfully" Nov 1 01:58:15.065731 env[1679]: time="2025-11-01T01:58:15.065112240Z" level=info msg="StopPodSandbox for \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\" returns successfully" Nov 1 01:58:15.065731 env[1679]: time="2025-11-01T01:58:15.065550992Z" level=info msg="RemovePodSandbox for \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\"" Nov 1 01:58:15.065731 env[1679]: time="2025-11-01T01:58:15.065586145Z" level=info msg="Forcibly stopping sandbox \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\"" Nov 1 01:58:15.131924 env[1679]: 2025-11-01 01:58:15.099 [WARNING][6138] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0", GenerateName:"calico-apiserver-fbf49c57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"381a5ea3-a9a9-42e2-8c3a-9c0b410afe13", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fbf49c57b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"53bcc1c78f9d5ad3776c5df08f665b8aac934f2ca1955f452ab49c2978e78c0d", Pod:"calico-apiserver-fbf49c57b-d5g9p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid3b5138066c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:15.131924 env[1679]: 2025-11-01 01:58:15.099 [INFO][6138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:58:15.131924 env[1679]: 2025-11-01 01:58:15.099 [INFO][6138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" iface="eth0" netns="" Nov 1 01:58:15.131924 env[1679]: 2025-11-01 01:58:15.099 [INFO][6138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:58:15.131924 env[1679]: 2025-11-01 01:58:15.099 [INFO][6138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:58:15.131924 env[1679]: 2025-11-01 01:58:15.119 [INFO][6155] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" HandleID="k8s-pod-network.9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:58:15.131924 env[1679]: 2025-11-01 01:58:15.119 [INFO][6155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:15.131924 env[1679]: 2025-11-01 01:58:15.119 [INFO][6155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:15.131924 env[1679]: 2025-11-01 01:58:15.127 [WARNING][6155] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" HandleID="k8s-pod-network.9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:58:15.131924 env[1679]: 2025-11-01 01:58:15.127 [INFO][6155] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" HandleID="k8s-pod-network.9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--apiserver--fbf49c57b--d5g9p-eth0" Nov 1 01:58:15.131924 env[1679]: 2025-11-01 01:58:15.129 [INFO][6155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:15.131924 env[1679]: 2025-11-01 01:58:15.130 [INFO][6138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76" Nov 1 01:58:15.131924 env[1679]: time="2025-11-01T01:58:15.131867814Z" level=info msg="TearDown network for sandbox \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\" successfully" Nov 1 01:58:15.134302 env[1679]: time="2025-11-01T01:58:15.134274452Z" level=info msg="RemovePodSandbox \"9b1556d1ad7ff0ae460cc0226954b3bade4d4c0b23ec4549fddb26d3df051a76\" returns successfully" Nov 1 01:58:15.134786 env[1679]: time="2025-11-01T01:58:15.134754527Z" level=info msg="StopPodSandbox for \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\"" Nov 1 01:58:15.201029 env[1679]: 2025-11-01 01:58:15.168 [WARNING][6183] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-whisker--5d9b556fcb--w5np7-eth0" Nov 1 01:58:15.201029 env[1679]: 2025-11-01 01:58:15.168 [INFO][6183] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:58:15.201029 env[1679]: 2025-11-01 01:58:15.168 [INFO][6183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" iface="eth0" netns="" Nov 1 01:58:15.201029 env[1679]: 2025-11-01 01:58:15.168 [INFO][6183] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:58:15.201029 env[1679]: 2025-11-01 01:58:15.168 [INFO][6183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:58:15.201029 env[1679]: 2025-11-01 01:58:15.188 [INFO][6202] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" HandleID="k8s-pod-network.b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Workload="ci--3510.3.8--n--0f05b56927-k8s-whisker--5d9b556fcb--w5np7-eth0" Nov 1 01:58:15.201029 env[1679]: 2025-11-01 01:58:15.188 [INFO][6202] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:15.201029 env[1679]: 2025-11-01 01:58:15.188 [INFO][6202] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:15.201029 env[1679]: 2025-11-01 01:58:15.196 [WARNING][6202] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" HandleID="k8s-pod-network.b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Workload="ci--3510.3.8--n--0f05b56927-k8s-whisker--5d9b556fcb--w5np7-eth0" Nov 1 01:58:15.201029 env[1679]: 2025-11-01 01:58:15.196 [INFO][6202] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" HandleID="k8s-pod-network.b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Workload="ci--3510.3.8--n--0f05b56927-k8s-whisker--5d9b556fcb--w5np7-eth0" Nov 1 01:58:15.201029 env[1679]: 2025-11-01 01:58:15.198 [INFO][6202] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:15.201029 env[1679]: 2025-11-01 01:58:15.199 [INFO][6183] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:58:15.201606 env[1679]: time="2025-11-01T01:58:15.201027998Z" level=info msg="TearDown network for sandbox \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\" successfully" Nov 1 01:58:15.201606 env[1679]: time="2025-11-01T01:58:15.201064655Z" level=info msg="StopPodSandbox for \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\" returns successfully" Nov 1 01:58:15.201606 env[1679]: time="2025-11-01T01:58:15.201480598Z" level=info msg="RemovePodSandbox for \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\"" Nov 1 01:58:15.201606 env[1679]: time="2025-11-01T01:58:15.201518114Z" level=info msg="Forcibly stopping sandbox \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\"" Nov 1 01:58:15.266827 env[1679]: 2025-11-01 01:58:15.234 [WARNING][6229] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" 
WorkloadEndpoint="ci--3510.3.8--n--0f05b56927-k8s-whisker--5d9b556fcb--w5np7-eth0" Nov 1 01:58:15.266827 env[1679]: 2025-11-01 01:58:15.234 [INFO][6229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:58:15.266827 env[1679]: 2025-11-01 01:58:15.234 [INFO][6229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" iface="eth0" netns="" Nov 1 01:58:15.266827 env[1679]: 2025-11-01 01:58:15.234 [INFO][6229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:58:15.266827 env[1679]: 2025-11-01 01:58:15.234 [INFO][6229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:58:15.266827 env[1679]: 2025-11-01 01:58:15.255 [INFO][6248] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" HandleID="k8s-pod-network.b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Workload="ci--3510.3.8--n--0f05b56927-k8s-whisker--5d9b556fcb--w5np7-eth0" Nov 1 01:58:15.266827 env[1679]: 2025-11-01 01:58:15.255 [INFO][6248] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:15.266827 env[1679]: 2025-11-01 01:58:15.255 [INFO][6248] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:15.266827 env[1679]: 2025-11-01 01:58:15.262 [WARNING][6248] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" HandleID="k8s-pod-network.b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Workload="ci--3510.3.8--n--0f05b56927-k8s-whisker--5d9b556fcb--w5np7-eth0" Nov 1 01:58:15.266827 env[1679]: 2025-11-01 01:58:15.262 [INFO][6248] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" HandleID="k8s-pod-network.b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Workload="ci--3510.3.8--n--0f05b56927-k8s-whisker--5d9b556fcb--w5np7-eth0" Nov 1 01:58:15.266827 env[1679]: 2025-11-01 01:58:15.263 [INFO][6248] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:15.266827 env[1679]: 2025-11-01 01:58:15.265 [INFO][6229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04" Nov 1 01:58:15.267397 env[1679]: time="2025-11-01T01:58:15.266824448Z" level=info msg="TearDown network for sandbox \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\" successfully" Nov 1 01:58:15.269167 env[1679]: time="2025-11-01T01:58:15.269112634Z" level=info msg="RemovePodSandbox \"b43df139afb6d9656126726e216967f138229c39c94691a2c891a39513af6f04\" returns successfully" Nov 1 01:58:15.269613 env[1679]: time="2025-11-01T01:58:15.269542299Z" level=info msg="StopPodSandbox for \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\"" Nov 1 01:58:15.335700 env[1679]: 2025-11-01 01:58:15.303 [WARNING][6271] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0", GenerateName:"calico-kube-controllers-5ddd8b55c8-", Namespace:"calico-system", SelfLink:"", UID:"792abee8-a81f-4cb1-9ede-47798a35f0b4", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ddd8b55c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09", Pod:"calico-kube-controllers-5ddd8b55c8-kbtkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3a6d4e0f08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:15.335700 env[1679]: 2025-11-01 01:58:15.303 [INFO][6271] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:58:15.335700 env[1679]: 2025-11-01 01:58:15.303 [INFO][6271] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" iface="eth0" netns="" Nov 1 01:58:15.335700 env[1679]: 2025-11-01 01:58:15.303 [INFO][6271] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:58:15.335700 env[1679]: 2025-11-01 01:58:15.303 [INFO][6271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:58:15.335700 env[1679]: 2025-11-01 01:58:15.324 [INFO][6289] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" HandleID="k8s-pod-network.5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:58:15.335700 env[1679]: 2025-11-01 01:58:15.324 [INFO][6289] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:15.335700 env[1679]: 2025-11-01 01:58:15.324 [INFO][6289] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:15.335700 env[1679]: 2025-11-01 01:58:15.330 [WARNING][6289] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" HandleID="k8s-pod-network.5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:58:15.335700 env[1679]: 2025-11-01 01:58:15.330 [INFO][6289] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" HandleID="k8s-pod-network.5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:58:15.335700 env[1679]: 2025-11-01 01:58:15.332 [INFO][6289] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:15.335700 env[1679]: 2025-11-01 01:58:15.333 [INFO][6271] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:58:15.336341 env[1679]: time="2025-11-01T01:58:15.335692032Z" level=info msg="TearDown network for sandbox \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\" successfully" Nov 1 01:58:15.336341 env[1679]: time="2025-11-01T01:58:15.335727545Z" level=info msg="StopPodSandbox for \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\" returns successfully" Nov 1 01:58:15.336341 env[1679]: time="2025-11-01T01:58:15.336138134Z" level=info msg="RemovePodSandbox for \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\"" Nov 1 01:58:15.336341 env[1679]: time="2025-11-01T01:58:15.336178380Z" level=info msg="Forcibly stopping sandbox \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\"" Nov 1 01:58:15.404503 env[1679]: 2025-11-01 01:58:15.370 [WARNING][6314] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0", GenerateName:"calico-kube-controllers-5ddd8b55c8-", Namespace:"calico-system", SelfLink:"", UID:"792abee8-a81f-4cb1-9ede-47798a35f0b4", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ddd8b55c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"26940886dcc506224697f55b91ba12b68a01c40ff8bf08e469863efd5f272c09", Pod:"calico-kube-controllers-5ddd8b55c8-kbtkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3a6d4e0f08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:15.404503 env[1679]: 2025-11-01 01:58:15.371 [INFO][6314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:58:15.404503 env[1679]: 2025-11-01 01:58:15.371 [INFO][6314] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" iface="eth0" netns="" Nov 1 01:58:15.404503 env[1679]: 2025-11-01 01:58:15.371 [INFO][6314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:58:15.404503 env[1679]: 2025-11-01 01:58:15.371 [INFO][6314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:58:15.404503 env[1679]: 2025-11-01 01:58:15.391 [INFO][6333] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" HandleID="k8s-pod-network.5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:58:15.404503 env[1679]: 2025-11-01 01:58:15.391 [INFO][6333] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:15.404503 env[1679]: 2025-11-01 01:58:15.392 [INFO][6333] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:15.404503 env[1679]: 2025-11-01 01:58:15.399 [WARNING][6333] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" HandleID="k8s-pod-network.5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:58:15.404503 env[1679]: 2025-11-01 01:58:15.399 [INFO][6333] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" HandleID="k8s-pod-network.5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Workload="ci--3510.3.8--n--0f05b56927-k8s-calico--kube--controllers--5ddd8b55c8--kbtkg-eth0" Nov 1 01:58:15.404503 env[1679]: 2025-11-01 01:58:15.401 [INFO][6333] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:15.404503 env[1679]: 2025-11-01 01:58:15.402 [INFO][6314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e" Nov 1 01:58:15.404503 env[1679]: time="2025-11-01T01:58:15.404476891Z" level=info msg="TearDown network for sandbox \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\" successfully" Nov 1 01:58:15.407313 env[1679]: time="2025-11-01T01:58:15.407283514Z" level=info msg="RemovePodSandbox \"5c3d7f1cc680da4b7780aba12862d92356816a9784b5941f4f812c5b44c92d9e\" returns successfully" Nov 1 01:58:15.407755 env[1679]: time="2025-11-01T01:58:15.407725556Z" level=info msg="StopPodSandbox for \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\"" Nov 1 01:58:15.473479 env[1679]: 2025-11-01 01:58:15.441 [WARNING][6359] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"66d2f097-1517-44b9-891a-35d40c5f36ae", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471", Pod:"csi-node-driver-4r6nm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94c1e45fb31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:15.473479 env[1679]: 2025-11-01 01:58:15.441 [INFO][6359] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:58:15.473479 env[1679]: 2025-11-01 01:58:15.441 [INFO][6359] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" iface="eth0" netns="" Nov 1 01:58:15.473479 env[1679]: 2025-11-01 01:58:15.441 [INFO][6359] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:58:15.473479 env[1679]: 2025-11-01 01:58:15.441 [INFO][6359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:58:15.473479 env[1679]: 2025-11-01 01:58:15.461 [INFO][6376] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" HandleID="k8s-pod-network.3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Workload="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:58:15.473479 env[1679]: 2025-11-01 01:58:15.461 [INFO][6376] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:15.473479 env[1679]: 2025-11-01 01:58:15.461 [INFO][6376] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:15.473479 env[1679]: 2025-11-01 01:58:15.468 [WARNING][6376] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" HandleID="k8s-pod-network.3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Workload="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:58:15.473479 env[1679]: 2025-11-01 01:58:15.468 [INFO][6376] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" HandleID="k8s-pod-network.3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Workload="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:58:15.473479 env[1679]: 2025-11-01 01:58:15.470 [INFO][6376] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:15.473479 env[1679]: 2025-11-01 01:58:15.471 [INFO][6359] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:58:15.474114 env[1679]: time="2025-11-01T01:58:15.473482572Z" level=info msg="TearDown network for sandbox \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\" successfully" Nov 1 01:58:15.474114 env[1679]: time="2025-11-01T01:58:15.473514135Z" level=info msg="StopPodSandbox for \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\" returns successfully" Nov 1 01:58:15.474114 env[1679]: time="2025-11-01T01:58:15.473925988Z" level=info msg="RemovePodSandbox for \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\"" Nov 1 01:58:15.474114 env[1679]: time="2025-11-01T01:58:15.473967447Z" level=info msg="Forcibly stopping sandbox \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\"" Nov 1 01:58:15.542112 env[1679]: 2025-11-01 01:58:15.508 [WARNING][6404] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"66d2f097-1517-44b9-891a-35d40c5f36ae", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0f05b56927", ContainerID:"c537788ec2bde39ea8ec78f190cb44f16b41b1acde61717ff6d0d61a06c97471", Pod:"csi-node-driver-4r6nm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94c1e45fb31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:58:15.542112 env[1679]: 2025-11-01 01:58:15.508 [INFO][6404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:58:15.542112 env[1679]: 2025-11-01 01:58:15.508 [INFO][6404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" iface="eth0" netns="" Nov 1 01:58:15.542112 env[1679]: 2025-11-01 01:58:15.508 [INFO][6404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:58:15.542112 env[1679]: 2025-11-01 01:58:15.508 [INFO][6404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:58:15.542112 env[1679]: 2025-11-01 01:58:15.530 [INFO][6421] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" HandleID="k8s-pod-network.3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Workload="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:58:15.542112 env[1679]: 2025-11-01 01:58:15.530 [INFO][6421] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:58:15.542112 env[1679]: 2025-11-01 01:58:15.530 [INFO][6421] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:58:15.542112 env[1679]: 2025-11-01 01:58:15.537 [WARNING][6421] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" HandleID="k8s-pod-network.3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Workload="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:58:15.542112 env[1679]: 2025-11-01 01:58:15.537 [INFO][6421] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" HandleID="k8s-pod-network.3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Workload="ci--3510.3.8--n--0f05b56927-k8s-csi--node--driver--4r6nm-eth0" Nov 1 01:58:15.542112 env[1679]: 2025-11-01 01:58:15.538 [INFO][6421] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:58:15.542112 env[1679]: 2025-11-01 01:58:15.540 [INFO][6404] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695" Nov 1 01:58:15.542778 env[1679]: time="2025-11-01T01:58:15.542140614Z" level=info msg="TearDown network for sandbox \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\" successfully" Nov 1 01:58:15.544561 env[1679]: time="2025-11-01T01:58:15.544530673Z" level=info msg="RemovePodSandbox \"3a61be309d6d45b34d6feec15f427bdda3d58ce8640d393a1449deee0fbe1695\" returns successfully" Nov 1 01:58:17.515348 kubelet[2679]: E1101 01:58:17.515203 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 01:58:22.512048 kubelet[2679]: E1101 01:58:22.512012 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 01:58:23.513894 kubelet[2679]: E1101 01:58:23.513741 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 01:58:23.513894 kubelet[2679]: E1101 01:58:23.513821 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 01:58:24.519473 kubelet[2679]: E1101 01:58:24.519321 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:58:27.511823 kubelet[2679]: E1101 01:58:27.511795 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:58:31.514810 env[1679]: time="2025-11-01T01:58:31.514648793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:58:31.856844 env[1679]: time="2025-11-01T01:58:31.856733983Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:31.857278 env[1679]: time="2025-11-01T01:58:31.857242152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:58:31.857459 kubelet[2679]: E1101 01:58:31.857412 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:58:31.857774 kubelet[2679]: E1101 01:58:31.857467 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:58:31.857774 kubelet[2679]: E1101 01:58:31.857575 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b3852adad46a4293a11e539c5e005d65,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgwpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b79df9786-ds9vj_calico-system(aabe0a9d-10db-49d2-a1d8-2a8011591b5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:31.860070 env[1679]: time="2025-11-01T01:58:31.860040803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:58:32.192916 
env[1679]: time="2025-11-01T01:58:32.192813306Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:32.193301 env[1679]: time="2025-11-01T01:58:32.193256641Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:58:32.193466 kubelet[2679]: E1101 01:58:32.193420 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:58:32.193466 kubelet[2679]: E1101 01:58:32.193452 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:58:32.193562 kubelet[2679]: E1101 01:58:32.193516 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgwpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b79df9786-ds9vj_calico-system(aabe0a9d-10db-49d2-a1d8-2a8011591b5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:32.194760 kubelet[2679]: E1101 01:58:32.194713 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 01:58:34.515796 env[1679]: time="2025-11-01T01:58:34.515699208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:58:34.874921 env[1679]: time="2025-11-01T01:58:34.874805764Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:34.875342 env[1679]: time="2025-11-01T01:58:34.875287098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:58:34.875548 kubelet[2679]: E1101 01:58:34.875511 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:58:34.875850 kubelet[2679]: E1101 01:58:34.875563 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:58:34.875850 kubelet[2679]: E1101 01:58:34.875683 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45r66,R
eadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bm44l_calico-system(6bd4bd36-d549-4194-a331-51709a095bb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:34.876920 kubelet[2679]: E1101 01:58:34.876865 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 01:58:35.512178 env[1679]: time="2025-11-01T01:58:35.512125465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:58:35.854364 env[1679]: time="2025-11-01T01:58:35.854095763Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:35.855623 env[1679]: time="2025-11-01T01:58:35.855162260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:58:35.855802 kubelet[2679]: E1101 01:58:35.855579 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:58:35.855802 kubelet[2679]: E1101 01:58:35.855682 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:58:35.856193 kubelet[2679]: E1101 01:58:35.855957 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9792,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-fbf49c57b-msb77_calico-apiserver(66ab6902-4483-4337-8905-71710abec0d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:35.857401 kubelet[2679]: E1101 01:58:35.857287 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 01:58:36.512932 env[1679]: time="2025-11-01T01:58:36.512886057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:58:36.873958 env[1679]: time="2025-11-01T01:58:36.873873064Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:36.874317 env[1679]: time="2025-11-01T01:58:36.874295709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:58:36.874472 kubelet[2679]: E1101 01:58:36.874448 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:58:36.874634 kubelet[2679]: E1101 01:58:36.874481 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:58:36.874634 kubelet[2679]: E1101 01:58:36.874559 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shhx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbf49c57b-d5g9p_calico-apiserver(381a5ea3-a9a9-42e2-8c3a-9c0b410afe13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:36.875685 kubelet[2679]: E1101 01:58:36.875665 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:58:37.514926 env[1679]: time="2025-11-01T01:58:37.514829648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:58:37.876830 env[1679]: 
time="2025-11-01T01:58:37.876558462Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:37.877801 env[1679]: time="2025-11-01T01:58:37.877314935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:58:37.877965 kubelet[2679]: E1101 01:58:37.877799 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:58:37.877965 kubelet[2679]: E1101 01:58:37.877924 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:58:37.878817 kubelet[2679]: E1101 01:58:37.878251 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rk5mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5ddd8b55c8-kbtkg_calico-system(792abee8-a81f-4cb1-9ede-47798a35f0b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:37.879778 kubelet[2679]: E1101 01:58:37.879668 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 01:58:41.511919 env[1679]: time="2025-11-01T01:58:41.511887278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:58:41.848113 env[1679]: 
time="2025-11-01T01:58:41.848031501Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:41.848664 env[1679]: time="2025-11-01T01:58:41.848636842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:58:41.848851 kubelet[2679]: E1101 01:58:41.848826 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:58:41.849064 kubelet[2679]: E1101 01:58:41.848861 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:58:41.849064 kubelet[2679]: E1101 01:58:41.848930 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:41.850679 env[1679]: time="2025-11-01T01:58:41.850666195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:58:42.181178 env[1679]: time="2025-11-01T01:58:42.180999060Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:58:42.181882 env[1679]: time="2025-11-01T01:58:42.181793386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:58:42.182178 kubelet[2679]: E1101 01:58:42.182111 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:58:42.182361 kubelet[2679]: E1101 01:58:42.182193 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:58:42.182495 kubelet[2679]: E1101 01:58:42.182419 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:58:42.183795 kubelet[2679]: E1101 01:58:42.183714 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:58:46.511992 kubelet[2679]: E1101 01:58:46.511948 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 01:58:46.511992 kubelet[2679]: E1101 01:58:46.511990 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 01:58:46.512411 kubelet[2679]: E1101 01:58:46.512201 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 01:58:47.513712 kubelet[2679]: E1101 01:58:47.513600 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:58:50.513583 
kubelet[2679]: E1101 01:58:50.513484 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 01:58:56.514191 kubelet[2679]: E1101 01:58:56.514068 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:58:58.512144 kubelet[2679]: E1101 01:58:58.512122 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:58:58.512464 kubelet[2679]: E1101 01:58:58.512300 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 01:58:59.512278 kubelet[2679]: E1101 01:58:59.512250 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" 
podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 01:59:01.513104 kubelet[2679]: E1101 01:59:01.513039 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 01:59:02.514135 kubelet[2679]: E1101 01:59:02.514100 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 01:59:07.511982 kubelet[2679]: E1101 01:59:07.511914 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:59:11.513432 kubelet[2679]: E1101 01:59:11.513309 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:59:12.514173 kubelet[2679]: E1101 01:59:12.514045 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 01:59:13.512128 env[1679]: time="2025-11-01T01:59:13.512088407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:59:13.848644 env[1679]: time="2025-11-01T01:59:13.848559367Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 
1 01:59:13.848990 env[1679]: time="2025-11-01T01:59:13.848964903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:59:13.849171 kubelet[2679]: E1101 01:59:13.849148 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:59:13.849529 kubelet[2679]: E1101 01:59:13.849180 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:59:13.849529 kubelet[2679]: E1101 01:59:13.849249 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b3852adad46a4293a11e539c5e005d65,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgwpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b79df9786-ds9vj_calico-system(aabe0a9d-10db-49d2-a1d8-2a8011591b5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:13.851055 env[1679]: time="2025-11-01T01:59:13.851025957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:59:14.224294 
env[1679]: time="2025-11-01T01:59:14.224123836Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:14.224992 env[1679]: time="2025-11-01T01:59:14.224887585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:59:14.225187 kubelet[2679]: E1101 01:59:14.225140 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:59:14.225283 kubelet[2679]: E1101 01:59:14.225206 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:59:14.225452 kubelet[2679]: E1101 01:59:14.225364 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgwpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b79df9786-ds9vj_calico-system(aabe0a9d-10db-49d2-a1d8-2a8011591b5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:14.226681 kubelet[2679]: E1101 01:59:14.226613 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 01:59:14.515269 kubelet[2679]: E1101 01:59:14.515171 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 01:59:16.513298 kubelet[2679]: E1101 01:59:16.513197 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 01:59:20.514665 kubelet[2679]: E1101 01:59:20.514546 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:59:25.511920 env[1679]: time="2025-11-01T01:59:25.511866691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:59:25.512285 kubelet[2679]: E1101 01:59:25.511929 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 01:59:25.853407 env[1679]: time="2025-11-01T01:59:25.853113232Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:25.854253 env[1679]: time="2025-11-01T01:59:25.854107965Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:59:25.854633 kubelet[2679]: E1101 01:59:25.854538 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:25.854844 kubelet[2679]: E1101 01:59:25.854648 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:25.855438 kubelet[2679]: E1101 01:59:25.855173 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shhx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbf49c57b-d5g9p_calico-apiserver(381a5ea3-a9a9-42e2-8c3a-9c0b410afe13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:25.855918 env[1679]: time="2025-11-01T01:59:25.855409674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:59:25.856703 kubelet[2679]: E1101 01:59:25.856592 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:59:26.202139 env[1679]: time="2025-11-01T01:59:26.201854437Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:26.203102 env[1679]: time="2025-11-01T01:59:26.202918103Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:59:26.203605 kubelet[2679]: E1101 01:59:26.203477 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:59:26.203605 kubelet[2679]: E1101 01:59:26.203593 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:59:26.204057 kubelet[2679]: E1101 01:59:26.203896 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45r66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bm44l_calico-system(6bd4bd36-d549-4194-a331-51709a095bb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:26.205366 kubelet[2679]: E1101 01:59:26.205228 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 01:59:26.512119 env[1679]: time="2025-11-01T01:59:26.512058894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:59:26.862376 env[1679]: time="2025-11-01T01:59:26.862197636Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 01:59:26.862880 env[1679]: time="2025-11-01T01:59:26.862795455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:59:26.863108 kubelet[2679]: E1101 01:59:26.863034 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:26.863108 kubelet[2679]: E1101 01:59:26.863093 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:59:26.863655 kubelet[2679]: E1101 01:59:26.863261 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9792,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbf49c57b-msb77_calico-apiserver(66ab6902-4483-4337-8905-71710abec0d5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:26.864534 kubelet[2679]: E1101 01:59:26.864464 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 01:59:28.514696 env[1679]: time="2025-11-01T01:59:28.514587549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:59:28.857235 env[1679]: time="2025-11-01T01:59:28.856951369Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:28.858242 env[1679]: time="2025-11-01T01:59:28.858092582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:59:28.858725 kubelet[2679]: E1101 01:59:28.858591 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:59:28.858725 kubelet[2679]: E1101 01:59:28.858707 
2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:59:28.859793 kubelet[2679]: E1101 01:59:28.858995 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rk5mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5ddd8b55c8-kbtkg_calico-system(792abee8-a81f-4cb1-9ede-47798a35f0b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:28.860538 kubelet[2679]: E1101 01:59:28.860400 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 01:59:32.514313 env[1679]: time="2025-11-01T01:59:32.514088113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:59:32.877482 env[1679]: time="2025-11-01T01:59:32.877378802Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:32.877885 env[1679]: time="2025-11-01T01:59:32.877827590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:59:32.878031 kubelet[2679]: E1101 01:59:32.877982 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:59:32.878031 kubelet[2679]: E1101 01:59:32.878015 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:59:32.878272 kubelet[2679]: E1101 01:59:32.878089 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:32.879884 env[1679]: time="2025-11-01T01:59:32.879843548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:59:33.208814 env[1679]: time="2025-11-01T01:59:33.208587583Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:59:33.209476 env[1679]: time="2025-11-01T01:59:33.209367579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:59:33.209842 kubelet[2679]: E1101 01:59:33.209757 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:59:33.210035 kubelet[2679]: E1101 01:59:33.209861 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:59:33.210218 kubelet[2679]: E1101 01:59:33.210120 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:59:33.211627 kubelet[2679]: E1101 01:59:33.211479 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:59:37.512602 kubelet[2679]: E1101 01:59:37.512569 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 01:59:38.515293 kubelet[2679]: E1101 01:59:38.515182 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 01:59:39.511893 kubelet[2679]: E1101 01:59:39.511840 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:59:39.511893 kubelet[2679]: E1101 01:59:39.511857 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 
01:59:44.513031 kubelet[2679]: E1101 01:59:44.512973 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 01:59:47.511825 kubelet[2679]: E1101 01:59:47.511766 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 01:59:49.513219 kubelet[2679]: E1101 01:59:49.513147 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 01:59:50.512382 kubelet[2679]: E1101 01:59:50.512347 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 01:59:52.513281 kubelet[2679]: E1101 01:59:52.513167 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" 
podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 01:59:53.511752 kubelet[2679]: E1101 01:59:53.511708 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 01:59:55.512954 kubelet[2679]: E1101 01:59:55.512914 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:00:00.513619 kubelet[2679]: E1101 02:00:00.513553 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:00:01.512054 kubelet[2679]: E1101 02:00:01.512020 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:00:02.626232 systemd[1]: Started sshd@9-139.178.90.71:22-193.32.162.146:46098.service. Nov 1 02:00:02.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.90.71:22-193.32.162.146:46098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:00:02.712337 kernel: audit: type=1130 audit(1761962402.625:411): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.90.71:22-193.32.162.146:46098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:00:03.349464 sshd[6654]: Invalid user solana from 193.32.162.146 port 46098 Nov 1 02:00:03.512000 kubelet[2679]: E1101 02:00:03.511960 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:00:03.526113 sshd[6654]: pam_faillock(sshd:auth): User unknown Nov 1 02:00:03.526388 sshd[6654]: pam_unix(sshd:auth): check pass; user unknown Nov 1 02:00:03.526422 sshd[6654]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.32.162.146 Nov 1 02:00:03.526652 sshd[6654]: pam_faillock(sshd:auth): User unknown Nov 1 02:00:03.525000 audit[6654]: USER_AUTH pid=6654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="solana" exe="/usr/sbin/sshd" hostname=193.32.162.146 addr=193.32.162.146 terminal=ssh res=failed' Nov 1 02:00:03.613367 kernel: audit: type=1100 audit(1761962403.525:412): pid=6654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="solana" exe="/usr/sbin/sshd" hostname=193.32.162.146 addr=193.32.162.146 terminal=ssh res=failed' Nov 1 02:00:06.272564 sshd[6654]: Failed password for invalid user solana from 193.32.162.146 port 46098 ssh2 Nov 1 02:00:06.511767 kubelet[2679]: E1101 02:00:06.511673 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:00:06.597588 sshd[6654]: Connection closed by invalid user solana 193.32.162.146 port 46098 [preauth] Nov 1 02:00:06.598322 systemd[1]: sshd@9-139.178.90.71:22-193.32.162.146:46098.service: Deactivated successfully. Nov 1 02:00:06.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.90.71:22-193.32.162.146:46098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:00:06.683406 kernel: audit: type=1131 audit(1761962406.597:413): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.90.71:22-193.32.162.146:46098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:00:08.514048 kubelet[2679]: E1101 02:00:08.513953 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:00:09.512489 kubelet[2679]: E1101 02:00:09.512462 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:00:14.520102 kubelet[2679]: E1101 02:00:14.519941 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:00:15.513196 kubelet[2679]: E1101 02:00:15.513092 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:00:17.513763 kubelet[2679]: E1101 02:00:17.513688 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:00:18.514138 kubelet[2679]: E1101 02:00:18.513976 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:00:20.513243 kubelet[2679]: E1101 02:00:20.513152 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:00:21.512849 kubelet[2679]: E1101 02:00:21.512809 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:00:25.512121 kubelet[2679]: E1101 02:00:25.512092 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:00:28.514082 kubelet[2679]: E1101 02:00:28.513971 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:00:29.515677 kubelet[2679]: E1101 02:00:29.515574 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:00:31.513884 kubelet[2679]: E1101 02:00:31.513776 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:00:32.514061 kubelet[2679]: E1101 02:00:32.513949 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:00:32.515127 kubelet[2679]: E1101 02:00:32.514117 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:00:38.511995 kubelet[2679]: E1101 02:00:38.511933 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:00:41.513672 kubelet[2679]: E1101 02:00:41.513566 2679 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:00:41.514958 env[1679]: time="2025-11-01T02:00:41.514165428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 02:00:41.872138 env[1679]: time="2025-11-01T02:00:41.872037493Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:00:41.872518 env[1679]: time="2025-11-01T02:00:41.872461263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 02:00:41.872650 kubelet[2679]: E1101 02:00:41.872610 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 02:00:41.872650 kubelet[2679]: E1101 02:00:41.872643 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 02:00:41.872775 kubelet[2679]: E1101 02:00:41.872708 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b3852adad46a4293a11e539c5e005d65,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgwpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b79df9786-ds9vj_calico-system(aabe0a9d-10db-49d2-a1d8-2a8011591b5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 
1 02:00:41.874296 env[1679]: time="2025-11-01T02:00:41.874283979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 02:00:42.205400 env[1679]: time="2025-11-01T02:00:42.205119149Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:00:42.206344 env[1679]: time="2025-11-01T02:00:42.206204102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 02:00:42.206760 kubelet[2679]: E1101 02:00:42.206665 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 02:00:42.207056 kubelet[2679]: E1101 02:00:42.206783 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 02:00:42.207459 kubelet[2679]: E1101 02:00:42.207258 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgwpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b79df9786-ds9vj_calico-system(aabe0a9d-10db-49d2-a1d8-2a8011591b5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 02:00:42.208741 kubelet[2679]: E1101 02:00:42.208638 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:00:43.512374 kubelet[2679]: E1101 02:00:43.512302 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:00:43.512374 kubelet[2679]: E1101 02:00:43.512302 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:00:45.513549 kubelet[2679]: E1101 02:00:45.513390 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:00:49.512263 kubelet[2679]: E1101 02:00:49.512228 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:00:52.513912 env[1679]: time="2025-11-01T02:00:52.513806832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 02:00:52.877727 env[1679]: time="2025-11-01T02:00:52.877649439Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:00:52.886122 env[1679]: time="2025-11-01T02:00:52.886045007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 02:00:52.886226 kubelet[2679]: E1101 02:00:52.886204 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:00:52.886448 kubelet[2679]: E1101 02:00:52.886235 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:00:52.886448 kubelet[2679]: E1101 02:00:52.886333 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shhx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbf49c57b-d5g9p_calico-apiserver(381a5ea3-a9a9-42e2-8c3a-9c0b410afe13): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 02:00:52.887523 kubelet[2679]: E1101 02:00:52.887494 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:00:54.512974 env[1679]: time="2025-11-01T02:00:54.512942893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 02:00:54.857323 env[1679]: time="2025-11-01T02:00:54.857067201Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:00:54.858272 env[1679]: time="2025-11-01T02:00:54.858123032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 02:00:54.858735 kubelet[2679]: E1101 02:00:54.858609 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 02:00:54.858735 kubelet[2679]: E1101 02:00:54.858724 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 02:00:54.860071 kubelet[2679]: E1101 02:00:54.859076 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45r66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bm44l_calico-system(6bd4bd36-d549-4194-a331-51709a095bb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 02:00:54.860657 kubelet[2679]: E1101 02:00:54.860516 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 
02:00:57.512005 kubelet[2679]: E1101 02:00:57.511975 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:00:58.514290 env[1679]: time="2025-11-01T02:00:58.514192201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 02:00:58.921139 env[1679]: time="2025-11-01T02:00:58.920870882Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:00:58.926674 env[1679]: time="2025-11-01T02:00:58.926513582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 02:00:58.927054 kubelet[2679]: E1101 02:00:58.926932 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:00:58.927054 kubelet[2679]: E1101 02:00:58.927028 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:00:58.928047 kubelet[2679]: E1101 02:00:58.927317 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9792,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbf49c57b-msb77_calico-apiserver(66ab6902-4483-4337-8905-71710abec0d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 02:00:58.928818 kubelet[2679]: E1101 02:00:58.928706 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:01:00.515233 env[1679]: time="2025-11-01T02:01:00.515185925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 02:01:00.853315 env[1679]: 
time="2025-11-01T02:01:00.853221156Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:01:00.853652 env[1679]: time="2025-11-01T02:01:00.853621268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 02:01:00.853806 kubelet[2679]: E1101 02:01:00.853775 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 02:01:00.854101 kubelet[2679]: E1101 02:01:00.853820 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 02:01:00.854101 kubelet[2679]: E1101 02:01:00.853928 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rk5mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5ddd8b55c8-kbtkg_calico-system(792abee8-a81f-4cb1-9ede-47798a35f0b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 02:01:00.855123 kubelet[2679]: E1101 02:01:00.855098 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:01:04.518887 env[1679]: time="2025-11-01T02:01:04.518730686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 02:01:04.858444 env[1679]: 
time="2025-11-01T02:01:04.858142262Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:01:04.859223 env[1679]: time="2025-11-01T02:01:04.859086170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 02:01:04.859621 kubelet[2679]: E1101 02:01:04.859495 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:01:04.859621 kubelet[2679]: E1101 02:01:04.859602 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:01:04.860682 kubelet[2679]: E1101 02:01:04.859863 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 02:01:04.862915 env[1679]: time="2025-11-01T02:01:04.862839091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 02:01:05.226904 env[1679]: time="2025-11-01T02:01:05.226628718Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:01:05.227547 env[1679]: time="2025-11-01T02:01:05.227398583Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 02:01:05.227965 kubelet[2679]: E1101 02:01:05.227834 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:01:05.227965 kubelet[2679]: E1101 02:01:05.227946 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:01:05.228313 kubelet[2679]: E1101 02:01:05.228202 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 02:01:05.229690 kubelet[2679]: E1101 02:01:05.229562 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:01:07.512133 kubelet[2679]: E1101 02:01:07.512111 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:01:09.512300 kubelet[2679]: E1101 02:01:09.512229 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:01:09.512737 kubelet[2679]: E1101 02:01:09.512500 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:01:13.511837 kubelet[2679]: E1101 02:01:13.511766 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" 
podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:01:14.514854 kubelet[2679]: E1101 02:01:14.514766 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:01:19.513372 kubelet[2679]: E1101 02:01:19.513295 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:01:21.513270 kubelet[2679]: E1101 02:01:21.513174 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:01:22.522598 kubelet[2679]: E1101 02:01:22.522462 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:01:23.512365 kubelet[2679]: E1101 02:01:23.512331 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:01:27.513467 kubelet[2679]: E1101 02:01:27.513311 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:01:29.512351 kubelet[2679]: E1101 02:01:29.512305 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:01:33.512563 kubelet[2679]: E1101 02:01:33.512531 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:01:34.512695 kubelet[2679]: E1101 02:01:34.512661 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:01:36.513300 kubelet[2679]: E1101 02:01:36.513194 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:01:37.513512 kubelet[2679]: E1101 02:01:37.513413 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:01:40.514208 kubelet[2679]: E1101 02:01:40.514126 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:01:41.511909 kubelet[2679]: E1101 02:01:41.511843 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:01:45.512097 kubelet[2679]: E1101 02:01:45.512064 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:01:49.511824 kubelet[2679]: E1101 02:01:49.511788 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:01:49.511824 kubelet[2679]: E1101 02:01:49.511819 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:01:51.514863 kubelet[2679]: E1101 02:01:51.514756 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:01:52.513702 kubelet[2679]: E1101 02:01:52.513608 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:01:56.513190 kubelet[2679]: E1101 02:01:56.513048 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:01:57.512749 kubelet[2679]: E1101 02:01:57.512682 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:02:01.512481 kubelet[2679]: E1101 02:02:01.512452 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:02:02.518160 kubelet[2679]: E1101 02:02:02.518040 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:02:02.519439 kubelet[2679]: E1101 02:02:02.518900 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:02:04.514915 kubelet[2679]: E1101 02:02:04.514788 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:02:08.515212 kubelet[2679]: E1101 02:02:08.515061 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:02:09.512404 kubelet[2679]: E1101 02:02:09.512368 2679 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:02:13.513125 kubelet[2679]: E1101 02:02:13.513025 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:02:16.519150 kubelet[2679]: E1101 02:02:16.519064 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:02:16.520314 kubelet[2679]: E1101 02:02:16.519675 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:02:18.514130 kubelet[2679]: E1101 02:02:18.514031 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:02:19.512317 kubelet[2679]: E1101 02:02:19.512291 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:02:24.512569 kubelet[2679]: E1101 02:02:24.512537 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:02:26.513496 kubelet[2679]: E1101 02:02:26.513359 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:02:31.514690 kubelet[2679]: E1101 02:02:31.514570 2679 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:02:31.516145 kubelet[2679]: E1101 02:02:31.515157 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:02:31.516145 kubelet[2679]: E1101 02:02:31.515209 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:02:33.511917 kubelet[2679]: E1101 02:02:33.511850 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:02:37.511797 kubelet[2679]: E1101 02:02:37.511751 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:02:39.513902 kubelet[2679]: E1101 
02:02:39.513805 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:02:44.514914 kubelet[2679]: E1101 02:02:44.514782 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:02:45.512978 kubelet[2679]: E1101 02:02:45.512931 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:02:46.512377 kubelet[2679]: E1101 02:02:46.512339 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:02:46.512795 kubelet[2679]: E1101 02:02:46.512638 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" 
podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:02:48.513692 kubelet[2679]: E1101 02:02:48.513593 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:02:54.516587 kubelet[2679]: E1101 02:02:54.516544 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:02:56.513378 kubelet[2679]: E1101 02:02:56.513253 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" 
Nov 1 02:02:57.511463 kubelet[2679]: E1101 02:02:57.511439 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:02:58.515272 kubelet[2679]: E1101 02:02:58.515159 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:03:01.514995 kubelet[2679]: E1101 02:03:01.514899 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:03:02.512105 kubelet[2679]: E1101 02:03:02.512029 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:03:09.512054 kubelet[2679]: E1101 02:03:09.512023 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 
02:03:09.512054 kubelet[2679]: E1101 02:03:09.512029 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:03:10.513980 kubelet[2679]: E1101 02:03:10.513881 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:03:10.515219 kubelet[2679]: E1101 02:03:10.514590 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:03:12.515072 kubelet[2679]: E1101 02:03:12.514945 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:03:15.512119 kubelet[2679]: E1101 02:03:15.512056 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:03:20.513464 kubelet[2679]: E1101 02:03:20.513279 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:03:20.513464 kubelet[2679]: E1101 02:03:20.513390 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:03:21.512575 kubelet[2679]: E1101 02:03:21.512543 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:03:24.041685 systemd[1]: Started sshd@10-139.178.90.71:22-147.75.109.163:45428.service. Nov 1 02:03:24.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.90.71:22-147.75.109.163:45428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:24.126333 kernel: audit: type=1130 audit(1761962604.041:414): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.90.71:22-147.75.109.163:45428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:24.154000 audit[6977]: USER_ACCT pid=6977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:24.155158 sshd[6977]: Accepted publickey for core from 147.75.109.163 port 45428 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:03:24.160027 sshd[6977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:03:24.172467 systemd-logind[1668]: New session 12 of user core. Nov 1 02:03:24.175343 systemd[1]: Started session-12.scope. 
Nov 1 02:03:24.158000 audit[6977]: CRED_ACQ pid=6977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:24.261658 sshd[6977]: pam_unix(sshd:session): session closed for user core Nov 1 02:03:24.263019 systemd[1]: sshd@10-139.178.90.71:22-147.75.109.163:45428.service: Deactivated successfully. Nov 1 02:03:24.263655 systemd-logind[1668]: Session 12 logged out. Waiting for processes to exit. Nov 1 02:03:24.263661 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 02:03:24.264142 systemd-logind[1668]: Removed session 12. Nov 1 02:03:24.326896 kernel: audit: type=1101 audit(1761962604.154:415): pid=6977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:24.326969 kernel: audit: type=1103 audit(1761962604.158:416): pid=6977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:24.326990 kernel: audit: type=1006 audit(1761962604.158:417): pid=6977 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Nov 1 02:03:24.158000 audit[6977]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5e90bf30 a2=3 a3=0 items=0 ppid=1 pid=6977 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:24.470346 kernel: audit: type=1300 audit(1761962604.158:417): arch=c000003e syscall=1 success=yes exit=3 
a0=5 a1=7ffd5e90bf30 a2=3 a3=0 items=0 ppid=1 pid=6977 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:24.470408 kernel: audit: type=1327 audit(1761962604.158:417): proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:24.158000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:24.499419 kernel: audit: type=1105 audit(1761962604.180:418): pid=6977 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:24.180000 audit[6977]: USER_START pid=6977 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:24.589853 kernel: audit: type=1103 audit(1761962604.181:419): pid=6980 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:24.181000 audit[6980]: CRED_ACQ pid=6980 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:24.676660 kernel: audit: type=1106 audit(1761962604.261:420): pid=6977 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:24.261000 audit[6977]: USER_END pid=6977 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:24.262000 audit[6977]: CRED_DISP pid=6977 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:24.863465 kernel: audit: type=1104 audit(1761962604.262:421): pid=6977 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:24.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.90.71:22-147.75.109.163:45428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:03:25.512589 kubelet[2679]: E1101 02:03:25.512512 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:03:25.513234 env[1679]: time="2025-11-01T02:03:25.512858437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 02:03:25.879323 env[1679]: time="2025-11-01T02:03:25.879059932Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:03:25.880136 env[1679]: time="2025-11-01T02:03:25.879977501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 02:03:25.880584 kubelet[2679]: E1101 02:03:25.880445 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 02:03:25.880584 kubelet[2679]: E1101 02:03:25.880555 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 02:03:25.880939 kubelet[2679]: E1101 02:03:25.880839 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b3852adad46a4293a11e539c5e005d65,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgwpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b79df9786-ds9vj_calico-system(aabe0a9d-10db-49d2-a1d8-2a8011591b5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 02:03:25.883899 env[1679]: time="2025-11-01T02:03:25.883805387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 02:03:26.243147 env[1679]: time="2025-11-01T02:03:26.242876722Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:03:26.244042 env[1679]: time="2025-11-01T02:03:26.243869358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 02:03:26.244551 kubelet[2679]: E1101 02:03:26.244427 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 02:03:26.244551 kubelet[2679]: E1101 02:03:26.244541 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 02:03:26.244926 kubelet[2679]: E1101 02:03:26.244828 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgwpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b79df9786-ds9vj_calico-system(aabe0a9d-10db-49d2-a1d8-2a8011591b5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 02:03:26.246359 kubelet[2679]: E1101 02:03:26.246232 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:03:28.513040 kubelet[2679]: E1101 02:03:28.513009 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:03:29.269739 systemd[1]: Started sshd@11-139.178.90.71:22-147.75.109.163:45444.service. 
Nov 1 02:03:29.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.90.71:22-147.75.109.163:45444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:29.312160 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 02:03:29.312270 kernel: audit: type=1130 audit(1761962609.269:423): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.90.71:22-147.75.109.163:45444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:29.342246 sshd[7010]: Accepted publickey for core from 147.75.109.163 port 45444 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:03:29.343689 sshd[7010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:03:29.346076 systemd-logind[1668]: New session 13 of user core. Nov 1 02:03:29.346621 systemd[1]: Started session-13.scope. Nov 1 02:03:29.341000 audit[7010]: USER_ACCT pid=7010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:29.424636 sshd[7010]: pam_unix(sshd:session): session closed for user core Nov 1 02:03:29.426045 systemd[1]: sshd@11-139.178.90.71:22-147.75.109.163:45444.service: Deactivated successfully. Nov 1 02:03:29.426685 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 02:03:29.426710 systemd-logind[1668]: Session 13 logged out. Waiting for processes to exit. Nov 1 02:03:29.427195 systemd-logind[1668]: Removed session 13. 
Nov 1 02:03:29.494535 kernel: audit: type=1101 audit(1761962609.341:424): pid=7010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:29.494587 kernel: audit: type=1103 audit(1761962609.343:425): pid=7010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:29.343000 audit[7010]: CRED_ACQ pid=7010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:29.644970 kernel: audit: type=1006 audit(1761962609.343:426): pid=7010 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Nov 1 02:03:29.645041 kernel: audit: type=1300 audit(1761962609.343:426): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcdb30d30 a2=3 a3=0 items=0 ppid=1 pid=7010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:29.343000 audit[7010]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcdb30d30 a2=3 a3=0 items=0 ppid=1 pid=7010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:29.343000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:29.768484 kernel: audit: type=1327 audit(1761962609.343:426): proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:29.768606 
kernel: audit: type=1105 audit(1761962609.348:427): pid=7010 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:29.348000 audit[7010]: USER_START pid=7010 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:29.349000 audit[7013]: CRED_ACQ pid=7013 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:29.864397 kernel: audit: type=1103 audit(1761962609.349:428): pid=7013 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:29.424000 audit[7010]: USER_END pid=7010 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:30.048491 kernel: audit: type=1106 audit(1761962609.424:429): pid=7010 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:29.425000 
audit[7010]: CRED_DISP pid=7010 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:30.137800 kernel: audit: type=1104 audit(1761962609.425:430): pid=7010 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:29.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.90.71:22-147.75.109.163:45444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:33.513821 kubelet[2679]: E1101 02:03:33.513686 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:03:34.428227 systemd[1]: Started sshd@12-139.178.90.71:22-147.75.109.163:45238.service. Nov 1 02:03:34.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.90.71:22-147.75.109.163:45238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:03:34.454630 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 02:03:34.454699 kernel: audit: type=1130 audit(1761962614.428:432): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.90.71:22-147.75.109.163:45238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:34.512991 kubelet[2679]: E1101 02:03:34.512965 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:03:34.568000 audit[7039]: USER_ACCT pid=7039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:34.569262 sshd[7039]: Accepted publickey for core from 147.75.109.163 port 45238 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:03:34.570664 sshd[7039]: pam_unix(sshd:session): session opened for user 
core(uid=500) by (uid=0) Nov 1 02:03:34.573269 systemd-logind[1668]: New session 14 of user core. Nov 1 02:03:34.573783 systemd[1]: Started session-14.scope. Nov 1 02:03:34.570000 audit[7039]: CRED_ACQ pid=7039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:34.661724 sshd[7039]: pam_unix(sshd:session): session closed for user core Nov 1 02:03:34.663407 systemd[1]: Started sshd@13-139.178.90.71:22-147.75.109.163:45242.service. Nov 1 02:03:34.663763 systemd[1]: sshd@12-139.178.90.71:22-147.75.109.163:45238.service: Deactivated successfully. Nov 1 02:03:34.664313 systemd-logind[1668]: Session 14 logged out. Waiting for processes to exit. Nov 1 02:03:34.664357 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 02:03:34.665052 systemd-logind[1668]: Removed session 14. Nov 1 02:03:34.750850 kernel: audit: type=1101 audit(1761962614.568:433): pid=7039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:34.750932 kernel: audit: type=1103 audit(1761962614.570:434): pid=7039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:34.750956 kernel: audit: type=1006 audit(1761962614.570:435): pid=7039 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Nov 1 02:03:34.570000 audit[7039]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff2319e10 a2=3 a3=0 items=0 ppid=1 pid=7039 auid=500 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:34.810336 kernel: audit: type=1300 audit(1761962614.570:435): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff2319e10 a2=3 a3=0 items=0 ppid=1 pid=7039 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:34.836130 sshd[7063]: Accepted publickey for core from 147.75.109.163 port 45242 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:03:34.837652 sshd[7063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:03:34.839824 systemd-logind[1668]: New session 15 of user core. Nov 1 02:03:34.840370 systemd[1]: Started session-15.scope. Nov 1 02:03:34.570000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:34.902333 kernel: audit: type=1327 audit(1761962614.570:435): proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:34.575000 audit[7039]: USER_START pid=7039 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:34.936533 sshd[7063]: pam_unix(sshd:session): session closed for user core Nov 1 02:03:34.938138 systemd[1]: Started sshd@14-139.178.90.71:22-147.75.109.163:45254.service. Nov 1 02:03:34.938538 systemd[1]: sshd@13-139.178.90.71:22-147.75.109.163:45242.service: Deactivated successfully. Nov 1 02:03:34.939201 systemd-logind[1668]: Session 15 logged out. Waiting for processes to exit. Nov 1 02:03:34.939235 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 02:03:34.939681 systemd-logind[1668]: Removed session 15. 
Nov 1 02:03:35.026401 kernel: audit: type=1105 audit(1761962614.575:436): pid=7039 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:35.026479 kernel: audit: type=1103 audit(1761962614.576:437): pid=7042 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:34.576000 audit[7042]: CRED_ACQ pid=7042 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:35.115546 kernel: audit: type=1106 audit(1761962614.662:438): pid=7039 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:34.662000 audit[7039]: USER_END pid=7039 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:35.142626 sshd[7088]: Accepted publickey for core from 147.75.109.163 port 45254 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:03:35.144217 sshd[7088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:03:35.146510 systemd-logind[1668]: New session 16 of user core. 
Nov 1 02:03:35.146965 systemd[1]: Started session-16.scope. Nov 1 02:03:34.662000 audit[7039]: CRED_DISP pid=7039 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:35.211399 kernel: audit: type=1104 audit(1761962614.662:439): pid=7039 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:35.225722 sshd[7088]: pam_unix(sshd:session): session closed for user core Nov 1 02:03:35.227104 systemd[1]: sshd@14-139.178.90.71:22-147.75.109.163:45254.service: Deactivated successfully. Nov 1 02:03:35.227728 systemd-logind[1668]: Session 16 logged out. Waiting for processes to exit. Nov 1 02:03:35.227743 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 02:03:35.228195 systemd-logind[1668]: Removed session 16. Nov 1 02:03:34.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.90.71:22-147.75.109.163:45242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:34.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.90.71:22-147.75.109.163:45238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:03:34.835000 audit[7063]: USER_ACCT pid=7063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:34.837000 audit[7063]: CRED_ACQ pid=7063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:34.837000 audit[7063]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcec5ad450 a2=3 a3=0 items=0 ppid=1 pid=7063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:34.837000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:34.842000 audit[7063]: USER_START pid=7063 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:34.842000 audit[7067]: CRED_ACQ pid=7067 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:34.936000 audit[7063]: USER_END pid=7063 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:34.936000 audit[7063]: CRED_DISP 
pid=7063 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:34.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.90.71:22-147.75.109.163:45254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:34.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.90.71:22-147.75.109.163:45242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:35.142000 audit[7088]: USER_ACCT pid=7088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:35.143000 audit[7088]: CRED_ACQ pid=7088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:35.143000 audit[7088]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6cc8b6d0 a2=3 a3=0 items=0 ppid=1 pid=7088 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:35.143000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:35.149000 audit[7088]: USER_START pid=7088 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:35.149000 audit[7092]: CRED_ACQ pid=7092 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:35.226000 audit[7088]: USER_END pid=7088 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:35.226000 audit[7088]: CRED_DISP pid=7088 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:35.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.90.71:22-147.75.109.163:45254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:03:35.513693 env[1679]: time="2025-11-01T02:03:35.513613109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 02:03:35.846553 env[1679]: time="2025-11-01T02:03:35.846424674Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:03:35.847305 env[1679]: time="2025-11-01T02:03:35.847214142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 02:03:35.847831 kubelet[2679]: E1101 02:03:35.847749 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 02:03:35.849025 kubelet[2679]: E1101 02:03:35.847858 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 02:03:35.849025 kubelet[2679]: E1101 02:03:35.848168 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45r66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bm44l_calico-system(6bd4bd36-d549-4194-a331-51709a095bb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 02:03:35.849622 kubelet[2679]: E1101 02:03:35.849562 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:03:38.515135 kubelet[2679]: E1101 02:03:38.515008 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:03:40.230297 systemd[1]: Started sshd@15-139.178.90.71:22-147.75.109.163:50630.service. Nov 1 02:03:40.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.90.71:22-147.75.109.163:50630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:40.257076 kernel: kauditd_printk_skb: 23 callbacks suppressed Nov 1 02:03:40.257124 kernel: audit: type=1130 audit(1761962620.230:459): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.90.71:22-147.75.109.163:50630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:03:40.372000 audit[7118]: USER_ACCT pid=7118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:40.373390 sshd[7118]: Accepted publickey for core from 147.75.109.163 port 50630 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:03:40.374657 sshd[7118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:03:40.377199 systemd-logind[1668]: New session 17 of user core. Nov 1 02:03:40.377662 systemd[1]: Started session-17.scope. Nov 1 02:03:40.455867 sshd[7118]: pam_unix(sshd:session): session closed for user core Nov 1 02:03:40.457157 systemd[1]: sshd@15-139.178.90.71:22-147.75.109.163:50630.service: Deactivated successfully. Nov 1 02:03:40.457789 systemd-logind[1668]: Session 17 logged out. Waiting for processes to exit. Nov 1 02:03:40.457825 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 02:03:40.458190 systemd-logind[1668]: Removed session 17. 
Nov 1 02:03:40.374000 audit[7118]: CRED_ACQ pid=7118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:40.511496 env[1679]: time="2025-11-01T02:03:40.511476929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 02:03:40.555020 kernel: audit: type=1101 audit(1761962620.372:460): pid=7118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:40.555070 kernel: audit: type=1103 audit(1761962620.374:461): pid=7118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:40.555085 kernel: audit: type=1006 audit(1761962620.374:462): pid=7118 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Nov 1 02:03:40.374000 audit[7118]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3211afb0 a2=3 a3=0 items=0 ppid=1 pid=7118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:40.705691 kernel: audit: type=1300 audit(1761962620.374:462): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3211afb0 a2=3 a3=0 items=0 ppid=1 pid=7118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:40.705775 kernel: audit: type=1327 audit(1761962620.374:462): 
proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:40.374000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:40.736175 kernel: audit: type=1105 audit(1761962620.379:463): pid=7118 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:40.379000 audit[7118]: USER_START pid=7118 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:40.380000 audit[7121]: CRED_ACQ pid=7121 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:40.884914 env[1679]: time="2025-11-01T02:03:40.884854199Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:03:40.885329 env[1679]: time="2025-11-01T02:03:40.885276184Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 02:03:40.885492 kubelet[2679]: E1101 02:03:40.885435 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:03:40.885492 kubelet[2679]: E1101 02:03:40.885466 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:03:40.885681 kubelet[2679]: E1101 02:03:40.885543 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9792,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbf49c57b-msb77_calico-apiserver(66ab6902-4483-4337-8905-71710abec0d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 02:03:40.886748 kubelet[2679]: E1101 02:03:40.886703 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:03:40.919833 kernel: audit: type=1103 audit(1761962620.380:464): pid=7121 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:40.919899 kernel: audit: type=1106 audit(1761962620.456:465): pid=7118 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:40.456000 audit[7118]: USER_END pid=7118 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:40.456000 audit[7118]: CRED_DISP pid=7118 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:41.104583 kernel: audit: type=1104 audit(1761962620.456:466): pid=7118 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:40.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.90.71:22-147.75.109.163:50630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:03:41.513738 env[1679]: time="2025-11-01T02:03:41.513306915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 02:03:41.860060 env[1679]: time="2025-11-01T02:03:41.859819012Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:03:41.860837 env[1679]: time="2025-11-01T02:03:41.860699861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 02:03:41.861186 kubelet[2679]: E1101 02:03:41.861077 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:03:41.861186 kubelet[2679]: E1101 02:03:41.861174 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:03:41.861591 kubelet[2679]: E1101 02:03:41.861450 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shhx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fbf49c57b-d5g9p_calico-apiserver(381a5ea3-a9a9-42e2-8c3a-9c0b410afe13): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 02:03:41.862912 kubelet[2679]: E1101 02:03:41.862804 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:03:45.462995 systemd[1]: Started sshd@16-139.178.90.71:22-147.75.109.163:50634.service. Nov 1 02:03:45.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.90.71:22-147.75.109.163:50634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:45.490044 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 02:03:45.490110 kernel: audit: type=1130 audit(1761962625.462:468): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.90.71:22-147.75.109.163:50634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:03:45.604000 audit[7147]: USER_ACCT pid=7147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:45.605471 sshd[7147]: Accepted publickey for core from 147.75.109.163 port 50634 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:03:45.606459 sshd[7147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:03:45.608843 systemd-logind[1668]: New session 18 of user core. Nov 1 02:03:45.609352 systemd[1]: Started session-18.scope. Nov 1 02:03:45.695396 sshd[7147]: pam_unix(sshd:session): session closed for user core Nov 1 02:03:45.605000 audit[7147]: CRED_ACQ pid=7147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:45.696916 systemd[1]: sshd@16-139.178.90.71:22-147.75.109.163:50634.service: Deactivated successfully. Nov 1 02:03:45.697598 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 02:03:45.697639 systemd-logind[1668]: Session 18 logged out. Waiting for processes to exit. Nov 1 02:03:45.698136 systemd-logind[1668]: Removed session 18. 
Nov 1 02:03:45.787199 kernel: audit: type=1101 audit(1761962625.604:469): pid=7147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:45.787288 kernel: audit: type=1103 audit(1761962625.605:470): pid=7147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:45.787309 kernel: audit: type=1006 audit(1761962625.605:471): pid=7147 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Nov 1 02:03:45.845777 kernel: audit: type=1300 audit(1761962625.605:471): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda6d80500 a2=3 a3=0 items=0 ppid=1 pid=7147 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:45.605000 audit[7147]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda6d80500 a2=3 a3=0 items=0 ppid=1 pid=7147 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:45.937796 kernel: audit: type=1327 audit(1761962625.605:471): proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:45.605000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:45.610000 audit[7147]: USER_START pid=7147 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:46.062790 kernel: audit: type=1105 audit(1761962625.610:472): pid=7147 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:46.062832 kernel: audit: type=1103 audit(1761962625.611:473): pid=7150 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:45.611000 audit[7150]: CRED_ACQ pid=7150 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:45.695000 audit[7147]: USER_END pid=7147 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:46.247411 kernel: audit: type=1106 audit(1761962625.695:474): pid=7147 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:45.695000 audit[7147]: CRED_DISP pid=7147 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:46.336703 
kernel: audit: type=1104 audit(1761962625.695:475): pid=7147 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:45.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.90.71:22-147.75.109.163:50634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:47.513905 env[1679]: time="2025-11-01T02:03:47.513659517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 02:03:47.847284 env[1679]: time="2025-11-01T02:03:47.847013044Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:03:47.848018 env[1679]: time="2025-11-01T02:03:47.847899267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 02:03:47.848475 kubelet[2679]: E1101 02:03:47.848381 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 02:03:47.849681 kubelet[2679]: E1101 02:03:47.848499 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 02:03:47.849681 kubelet[2679]: E1101 02:03:47.848945 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rk5mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5ddd8b55c8-kbtkg_calico-system(792abee8-a81f-4cb1-9ede-47798a35f0b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 02:03:47.850482 env[1679]: time="2025-11-01T02:03:47.849228159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 02:03:47.850786 kubelet[2679]: E1101 02:03:47.850405 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:03:48.189907 env[1679]: 
time="2025-11-01T02:03:48.189765097Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:03:48.190414 env[1679]: time="2025-11-01T02:03:48.190360626Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 02:03:48.190683 kubelet[2679]: E1101 02:03:48.190608 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:03:48.190683 kubelet[2679]: E1101 02:03:48.190663 2679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:03:48.190859 kubelet[2679]: E1101 02:03:48.190786 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 02:03:48.192803 env[1679]: time="2025-11-01T02:03:48.192736979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 02:03:48.513415 kubelet[2679]: E1101 02:03:48.513314 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:03:48.524233 env[1679]: time="2025-11-01T02:03:48.524114137Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:03:48.525185 env[1679]: time="2025-11-01T02:03:48.525060251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 02:03:48.525574 kubelet[2679]: E1101 02:03:48.525468 2679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:03:48.525574 kubelet[2679]: E1101 02:03:48.525556 2679 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:03:48.525929 kubelet[2679]: E1101 02:03:48.525798 2679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcM
ount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4r6nm_calico-system(66d2f097-1517-44b9-891a-35d40c5f36ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 02:03:48.527305 kubelet[2679]: E1101 02:03:48.527166 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:03:50.514685 kubelet[2679]: E1101 02:03:50.514648 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:03:50.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.90.71:22-147.75.109.163:53228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:50.698899 systemd[1]: Started sshd@17-139.178.90.71:22-147.75.109.163:53228.service. Nov 1 02:03:50.725577 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 02:03:50.725700 kernel: audit: type=1130 audit(1761962630.698:477): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.90.71:22-147.75.109.163:53228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:03:50.842000 audit[7195]: USER_ACCT pid=7195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:50.842852 sshd[7195]: Accepted publickey for core from 147.75.109.163 port 53228 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:03:50.844118 sshd[7195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:03:50.846488 systemd-logind[1668]: New session 19 of user core. Nov 1 02:03:50.847013 systemd[1]: Started session-19.scope. Nov 1 02:03:50.927020 sshd[7195]: pam_unix(sshd:session): session closed for user core Nov 1 02:03:50.928408 systemd[1]: sshd@17-139.178.90.71:22-147.75.109.163:53228.service: Deactivated successfully. Nov 1 02:03:50.929071 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 02:03:50.929072 systemd-logind[1668]: Session 19 logged out. Waiting for processes to exit. Nov 1 02:03:50.929501 systemd-logind[1668]: Removed session 19. 
Nov 1 02:03:50.843000 audit[7195]: CRED_ACQ pid=7195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:50.934333 kernel: audit: type=1101 audit(1761962630.842:478): pid=7195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:50.934370 kernel: audit: type=1103 audit(1761962630.843:479): pid=7195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:51.083288 kernel: audit: type=1006 audit(1761962630.843:480): pid=7195 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Nov 1 02:03:51.083351 kernel: audit: type=1300 audit(1761962630.843:480): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3695fbe0 a2=3 a3=0 items=0 ppid=1 pid=7195 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:50.843000 audit[7195]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3695fbe0 a2=3 a3=0 items=0 ppid=1 pid=7195 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:50.843000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:51.205772 kernel: audit: type=1327 audit(1761962630.843:480): proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:51.205803 
kernel: audit: type=1105 audit(1761962630.848:481): pid=7195 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:50.848000 audit[7195]: USER_START pid=7195 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:51.300194 kernel: audit: type=1103 audit(1761962630.849:482): pid=7209 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:50.849000 audit[7209]: CRED_ACQ pid=7209 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:51.389411 kernel: audit: type=1106 audit(1761962630.927:483): pid=7195 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:50.927000 audit[7195]: USER_END pid=7195 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:51.484882 
kernel: audit: type=1104 audit(1761962630.927:484): pid=7195 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:50.927000 audit[7195]: CRED_DISP pid=7195 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:50.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.90.71:22-147.75.109.163:53228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:54.514941 kubelet[2679]: E1101 02:03:54.514836 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5" Nov 1 02:03:55.930985 systemd[1]: Started sshd@18-139.178.90.71:22-147.75.109.163:53238.service. Nov 1 02:03:55.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.90.71:22-147.75.109.163:53238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:03:55.957382 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 02:03:55.957472 kernel: audit: type=1130 audit(1761962635.930:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.90.71:22-147.75.109.163:53238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:56.071000 audit[7232]: USER_ACCT pid=7232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.072906 sshd[7232]: Accepted publickey for core from 147.75.109.163 port 53238 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:03:56.073651 sshd[7232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:03:56.076559 systemd-logind[1668]: New session 20 of user core. Nov 1 02:03:56.077631 systemd[1]: Started session-20.scope. Nov 1 02:03:56.157250 sshd[7232]: pam_unix(sshd:session): session closed for user core Nov 1 02:03:56.159390 systemd[1]: sshd@18-139.178.90.71:22-147.75.109.163:53238.service: Deactivated successfully. Nov 1 02:03:56.160322 systemd-logind[1668]: Session 20 logged out. Waiting for processes to exit. Nov 1 02:03:56.161863 systemd[1]: Started sshd@19-139.178.90.71:22-147.75.109.163:53244.service. Nov 1 02:03:56.162471 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 02:03:56.163182 systemd-logind[1668]: Removed session 20. 
Nov 1 02:03:56.072000 audit[7232]: CRED_ACQ pid=7232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.254581 kernel: audit: type=1101 audit(1761962636.071:487): pid=7232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.254637 kernel: audit: type=1103 audit(1761962636.072:488): pid=7232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.254657 kernel: audit: type=1006 audit(1761962636.072:489): pid=7232 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Nov 1 02:03:56.313108 kernel: audit: type=1300 audit(1761962636.072:489): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb24c2930 a2=3 a3=0 items=0 ppid=1 pid=7232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:56.072000 audit[7232]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb24c2930 a2=3 a3=0 items=0 ppid=1 pid=7232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:56.340231 sshd[7258]: Accepted publickey for core from 147.75.109.163 port 53244 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:03:56.341391 sshd[7258]: pam_unix(sshd:session): session opened 
for user core(uid=500) by (uid=0) Nov 1 02:03:56.343988 systemd-logind[1668]: New session 21 of user core. Nov 1 02:03:56.344487 systemd[1]: Started session-21.scope. Nov 1 02:03:56.072000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:56.405379 kernel: audit: type=1327 audit(1761962636.072:489): proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:56.434813 sshd[7258]: pam_unix(sshd:session): session closed for user core Nov 1 02:03:56.078000 audit[7232]: USER_START pid=7232 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.436341 kernel: audit: type=1105 audit(1761962636.078:490): pid=7232 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.437147 systemd[1]: Started sshd@20-139.178.90.71:22-147.75.109.163:53252.service. Nov 1 02:03:56.437908 systemd[1]: sshd@19-139.178.90.71:22-147.75.109.163:53244.service: Deactivated successfully. Nov 1 02:03:56.438666 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 02:03:56.439162 systemd-logind[1668]: Session 21 logged out. Waiting for processes to exit. Nov 1 02:03:56.439624 systemd-logind[1668]: Removed session 21. 
Nov 1 02:03:56.511484 kubelet[2679]: E1101 02:03:56.511420 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:03:56.079000 audit[7235]: CRED_ACQ pid=7235 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.619063 kernel: audit: type=1103 audit(1761962636.079:491): pid=7235 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.619125 kernel: audit: type=1106 audit(1761962636.156:492): pid=7232 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.156000 audit[7232]: USER_END pid=7232 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.646045 sshd[7281]: Accepted publickey for core from 
147.75.109.163 port 53252 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:03:56.647097 sshd[7281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:03:56.649367 systemd-logind[1668]: New session 22 of user core. Nov 1 02:03:56.649987 systemd[1]: Started session-22.scope. Nov 1 02:03:56.714453 kernel: audit: type=1104 audit(1761962636.156:493): pid=7232 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.156000 audit[7232]: CRED_DISP pid=7232 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.90.71:22-147.75.109.163:53238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:56.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.90.71:22-147.75.109.163:53244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:03:56.338000 audit[7258]: USER_ACCT pid=7258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.339000 audit[7258]: CRED_ACQ pid=7258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.339000 audit[7258]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc35d6b2b0 a2=3 a3=0 items=0 ppid=1 pid=7258 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:56.339000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:56.345000 audit[7258]: USER_START pid=7258 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.345000 audit[7261]: CRED_ACQ pid=7261 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.434000 audit[7258]: USER_END pid=7258 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.434000 audit[7258]: CRED_DISP 
pid=7258 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.90.71:22-147.75.109.163:53252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:56.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.90.71:22-147.75.109.163:53244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:56.644000 audit[7281]: USER_ACCT pid=7281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.645000 audit[7281]: CRED_ACQ pid=7281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.645000 audit[7281]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff39ac2290 a2=3 a3=0 items=0 ppid=1 pid=7281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:56.645000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:56.650000 audit[7281]: USER_START pid=7281 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:56.651000 audit[7285]: CRED_ACQ pid=7285 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.467000 audit[7308]: NETFILTER_CFG table=filter:130 family=2 entries=26 op=nft_register_rule pid=7308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 02:03:57.467000 audit[7308]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe52932f70 a2=0 a3=7ffe52932f5c items=0 ppid=2820 pid=7308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:57.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 02:03:57.477711 sshd[7281]: pam_unix(sshd:session): session closed for user core Nov 1 02:03:57.477000 audit[7281]: USER_END pid=7281 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.477000 audit[7281]: CRED_DISP pid=7281 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.477000 audit[7308]: NETFILTER_CFG table=nat:131 family=2 entries=20 op=nft_register_rule pid=7308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 02:03:57.477000 audit[7308]: SYSCALL arch=c000003e syscall=46 
success=yes exit=5772 a0=3 a1=7ffe52932f70 a2=0 a3=0 items=0 ppid=2820 pid=7308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:57.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 02:03:57.479563 systemd[1]: Started sshd@21-139.178.90.71:22-147.75.109.163:53264.service. Nov 1 02:03:57.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.90.71:22-147.75.109.163:53264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:57.480051 systemd[1]: sshd@20-139.178.90.71:22-147.75.109.163:53252.service: Deactivated successfully. Nov 1 02:03:57.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.90.71:22-147.75.109.163:53252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:57.480897 systemd-logind[1668]: Session 22 logged out. Waiting for processes to exit. Nov 1 02:03:57.480942 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 02:03:57.481505 systemd-logind[1668]: Removed session 22. 
Nov 1 02:03:57.492000 audit[7316]: NETFILTER_CFG table=filter:132 family=2 entries=38 op=nft_register_rule pid=7316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 02:03:57.492000 audit[7316]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff3c8c1620 a2=0 a3=7fff3c8c160c items=0 ppid=2820 pid=7316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:57.492000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 02:03:57.506000 audit[7316]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=7316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 02:03:57.506000 audit[7316]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff3c8c1620 a2=0 a3=0 items=0 ppid=2820 pid=7316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:57.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 02:03:57.513000 audit[7312]: USER_ACCT pid=7312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.514870 sshd[7312]: Accepted publickey for core from 147.75.109.163 port 53264 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:03:57.515000 audit[7312]: CRED_ACQ pid=7312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.515000 audit[7312]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4de6c8c0 a2=3 a3=0 items=0 ppid=1 pid=7312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:57.515000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:57.515828 sshd[7312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:03:57.519069 systemd-logind[1668]: New session 23 of user core. Nov 1 02:03:57.519612 systemd[1]: Started session-23.scope. Nov 1 02:03:57.520000 audit[7312]: USER_START pid=7312 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.521000 audit[7319]: CRED_ACQ pid=7319 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.672931 sshd[7312]: pam_unix(sshd:session): session closed for user core Nov 1 02:03:57.672000 audit[7312]: USER_END pid=7312 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.672000 audit[7312]: CRED_DISP pid=7312 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.674442 systemd[1]: Started sshd@22-139.178.90.71:22-147.75.109.163:53274.service. Nov 1 02:03:57.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.90.71:22-147.75.109.163:53274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:57.674735 systemd[1]: sshd@21-139.178.90.71:22-147.75.109.163:53264.service: Deactivated successfully. Nov 1 02:03:57.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.90.71:22-147.75.109.163:53264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:57.675291 systemd-logind[1668]: Session 23 logged out. Waiting for processes to exit. Nov 1 02:03:57.675332 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 02:03:57.675780 systemd-logind[1668]: Removed session 23. 
Nov 1 02:03:57.706000 audit[7339]: USER_ACCT pid=7339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.707358 sshd[7339]: Accepted publickey for core from 147.75.109.163 port 53274 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:03:57.707000 audit[7339]: CRED_ACQ pid=7339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.707000 audit[7339]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffacdf0ea0 a2=3 a3=0 items=0 ppid=1 pid=7339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:03:57.707000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:03:57.708276 sshd[7339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:03:57.711403 systemd-logind[1668]: New session 24 of user core. Nov 1 02:03:57.712158 systemd[1]: Started session-24.scope. 
Nov 1 02:03:57.713000 audit[7339]: USER_START pid=7339 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.714000 audit[7344]: CRED_ACQ pid=7344 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.825762 sshd[7339]: pam_unix(sshd:session): session closed for user core Nov 1 02:03:57.825000 audit[7339]: USER_END pid=7339 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.825000 audit[7339]: CRED_DISP pid=7339 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:03:57.827100 systemd[1]: sshd@22-139.178.90.71:22-147.75.109.163:53274.service: Deactivated successfully. Nov 1 02:03:57.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.90.71:22-147.75.109.163:53274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:03:57.827712 systemd-logind[1668]: Session 24 logged out. Waiting for processes to exit. Nov 1 02:03:57.827757 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 02:03:57.828253 systemd-logind[1668]: Removed session 24. 
Nov 1 02:03:58.512239 kubelet[2679]: E1101 02:03:58.512211 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4" Nov 1 02:03:59.515002 kubelet[2679]: E1101 02:03:59.514860 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae" Nov 1 02:04:00.512709 kubelet[2679]: E1101 02:04:00.512633 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2" Nov 1 02:04:01.796000 audit[7369]: NETFILTER_CFG table=filter:134 family=2 entries=26 op=nft_register_rule pid=7369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 02:04:01.824205 kernel: kauditd_printk_skb: 57 callbacks suppressed Nov 1 02:04:01.824290 kernel: audit: type=1325 audit(1761962641.796:535): table=filter:134 family=2 entries=26 op=nft_register_rule pid=7369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 02:04:01.796000 audit[7369]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc1d75e220 a2=0 a3=7ffc1d75e20c items=0 ppid=2820 pid=7369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:04:01.884339 kernel: audit: type=1300 audit(1761962641.796:535): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc1d75e220 a2=0 a3=7ffc1d75e20c items=0 ppid=2820 pid=7369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:04:01.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 02:04:01.980392 kernel: audit: type=1327 audit(1761962641.796:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 02:04:01.980000 audit[7369]: NETFILTER_CFG 
table=nat:135 family=2 entries=104 op=nft_register_chain pid=7369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 02:04:02.096589 kernel: audit: type=1325 audit(1761962641.980:536): table=nat:135 family=2 entries=104 op=nft_register_chain pid=7369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 02:04:02.096664 kernel: audit: type=1300 audit(1761962641.980:536): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc1d75e220 a2=0 a3=7ffc1d75e20c items=0 ppid=2820 pid=7369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:04:01.980000 audit[7369]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc1d75e220 a2=0 a3=7ffc1d75e20c items=0 ppid=2820 pid=7369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:04:02.193814 kernel: audit: type=1327 audit(1761962641.980:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 02:04:01.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 02:04:02.514636 kubelet[2679]: E1101 02:04:02.514506 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d" Nov 1 02:04:02.833407 systemd[1]: Started sshd@23-139.178.90.71:22-147.75.109.163:43628.service. Nov 1 02:04:02.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.90.71:22-147.75.109.163:43628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:04:02.927379 kernel: audit: type=1130 audit(1761962642.832:537): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.90.71:22-147.75.109.163:43628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:04:02.953878 sshd[7370]: Accepted publickey for core from 147.75.109.163 port 43628 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:04:02.953000 audit[7370]: USER_ACCT pid=7370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:04:02.956677 sshd[7370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:04:02.959085 systemd-logind[1668]: New session 25 of user core. Nov 1 02:04:02.959705 systemd[1]: Started session-25.scope. 
Nov 1 02:04:03.035438 sshd[7370]: pam_unix(sshd:session): session closed for user core Nov 1 02:04:03.036924 systemd[1]: sshd@23-139.178.90.71:22-147.75.109.163:43628.service: Deactivated successfully. Nov 1 02:04:03.037590 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 02:04:03.037612 systemd-logind[1668]: Session 25 logged out. Waiting for processes to exit. Nov 1 02:04:03.038092 systemd-logind[1668]: Removed session 25. Nov 1 02:04:02.955000 audit[7370]: CRED_ACQ pid=7370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:04:03.137756 kernel: audit: type=1101 audit(1761962642.953:538): pid=7370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:04:03.137826 kernel: audit: type=1103 audit(1761962642.955:539): pid=7370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:04:03.137841 kernel: audit: type=1006 audit(1761962642.955:540): pid=7370 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Nov 1 02:04:02.955000 audit[7370]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0af7f850 a2=3 a3=0 items=0 ppid=1 pid=7370 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:04:02.955000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 02:04:02.960000 audit[7370]: 
USER_START pid=7370 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:04:02.961000 audit[7373]: CRED_ACQ pid=7373 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:04:03.034000 audit[7370]: USER_END pid=7370 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:04:03.034000 audit[7370]: CRED_DISP pid=7370 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 02:04:03.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.90.71:22-147.75.109.163:43628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:04:07.513442 kubelet[2679]: E1101 02:04:07.513288 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13" Nov 1 02:04:08.038291 systemd[1]: Started sshd@24-139.178.90.71:22-147.75.109.163:43642.service. Nov 1 02:04:08.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.90.71:22-147.75.109.163:43642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:04:08.064924 kernel: kauditd_printk_skb: 7 callbacks suppressed Nov 1 02:04:08.065004 kernel: audit: type=1130 audit(1761962648.037:546): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.90.71:22-147.75.109.163:43642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Nov 1 02:04:08.180668 sshd[7412]: Accepted publickey for core from 147.75.109.163 port 43642 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ
Nov 1 02:04:08.179000 audit[7412]: USER_ACCT pid=7412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:08.181664 sshd[7412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 02:04:08.184054 systemd-logind[1668]: New session 26 of user core.
Nov 1 02:04:08.184631 systemd[1]: Started session-26.scope.
Nov 1 02:04:08.271729 sshd[7412]: pam_unix(sshd:session): session closed for user core
Nov 1 02:04:08.180000 audit[7412]: CRED_ACQ pid=7412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:08.273187 systemd[1]: sshd@24-139.178.90.71:22-147.75.109.163:43642.service: Deactivated successfully.
Nov 1 02:04:08.273819 systemd-logind[1668]: Session 26 logged out. Waiting for processes to exit.
Nov 1 02:04:08.273867 systemd[1]: session-26.scope: Deactivated successfully.
Nov 1 02:04:08.274272 systemd-logind[1668]: Removed session 26.
Nov 1 02:04:08.363442 kernel: audit: type=1101 audit(1761962648.179:547): pid=7412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:08.363517 kernel: audit: type=1103 audit(1761962648.180:548): pid=7412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:08.363540 kernel: audit: type=1006 audit(1761962648.180:549): pid=7412 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Nov 1 02:04:08.180000 audit[7412]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3ff242c0 a2=3 a3=0 items=0 ppid=1 pid=7412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 02:04:08.515201 kernel: audit: type=1300 audit(1761962648.180:549): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3ff242c0 a2=3 a3=0 items=0 ppid=1 pid=7412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 02:04:08.515266 kernel: audit: type=1327 audit(1761962648.180:549): proctitle=737368643A20636F7265205B707269765D
Nov 1 02:04:08.180000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 02:04:08.545914 kernel: audit: type=1105 audit(1761962648.185:550): pid=7412 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:08.185000 audit[7412]: USER_START pid=7412 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:08.185000 audit[7415]: CRED_ACQ pid=7415 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:08.730282 kernel: audit: type=1103 audit(1761962648.185:551): pid=7415 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:08.730366 kernel: audit: type=1106 audit(1761962648.271:552): pid=7412 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:08.271000 audit[7412]: USER_END pid=7412 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:08.826475 kernel: audit: type=1104 audit(1761962648.271:553): pid=7412 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:08.271000 audit[7412]: CRED_DISP pid=7412 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:08.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.90.71:22-147.75.109.163:43642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 02:04:09.512492 kubelet[2679]: E1101 02:04:09.512452 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-msb77" podUID="66ab6902-4483-4337-8905-71710abec0d5"
Nov 1 02:04:10.513911 kubelet[2679]: E1101 02:04:10.513766 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ddd8b55c8-kbtkg" podUID="792abee8-a81f-4cb1-9ede-47798a35f0b4"
Nov 1 02:04:13.280810 systemd[1]: Started sshd@25-139.178.90.71:22-147.75.109.163:50722.service.
Nov 1 02:04:13.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-139.178.90.71:22-147.75.109.163:50722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 02:04:13.308199 kernel: kauditd_printk_skb: 1 callbacks suppressed
Nov 1 02:04:13.308299 kernel: audit: type=1130 audit(1761962653.279:555): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-139.178.90.71:22-147.75.109.163:50722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 02:04:13.424000 audit[7445]: USER_ACCT pid=7445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:13.425238 sshd[7445]: Accepted publickey for core from 147.75.109.163 port 50722 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ
Nov 1 02:04:13.426683 sshd[7445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 02:04:13.428916 systemd-logind[1668]: New session 27 of user core.
Nov 1 02:04:13.429447 systemd[1]: Started session-27.scope.
Nov 1 02:04:13.509118 sshd[7445]: pam_unix(sshd:session): session closed for user core
Nov 1 02:04:13.510375 systemd[1]: sshd@25-139.178.90.71:22-147.75.109.163:50722.service: Deactivated successfully.
Nov 1 02:04:13.510981 systemd-logind[1668]: Session 27 logged out. Waiting for processes to exit.
Nov 1 02:04:13.510989 systemd[1]: session-27.scope: Deactivated successfully.
Nov 1 02:04:13.511413 systemd-logind[1668]: Removed session 27.
Nov 1 02:04:13.511853 kubelet[2679]: E1101 02:04:13.511830 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4r6nm" podUID="66d2f097-1517-44b9-891a-35d40c5f36ae"
Nov 1 02:04:13.425000 audit[7445]: CRED_ACQ pid=7445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:13.607576 kernel: audit: type=1101 audit(1761962653.424:556): pid=7445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:13.607650 kernel: audit: type=1103 audit(1761962653.425:557): pid=7445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:13.607671 kernel: audit: type=1006 audit(1761962653.425:558): pid=7445 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Nov 1 02:04:13.666078 kernel: audit: type=1300 audit(1761962653.425:558): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc105f3c0 a2=3 a3=0 items=0 ppid=1 pid=7445 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 02:04:13.425000 audit[7445]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc105f3c0 a2=3 a3=0 items=0 ppid=1 pid=7445 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 02:04:13.425000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 02:04:13.788399 kernel: audit: type=1327 audit(1761962653.425:558): proctitle=737368643A20636F7265205B707269765D
Nov 1 02:04:13.788431 kernel: audit: type=1105 audit(1761962653.430:559): pid=7445 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:13.430000 audit[7445]: USER_START pid=7445 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:13.882721 kernel: audit: type=1103 audit(1761962653.431:560): pid=7448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:13.431000 audit[7448]: CRED_ACQ pid=7448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:13.971827 kernel: audit: type=1106 audit(1761962653.508:561): pid=7445 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:13.508000 audit[7445]: USER_END pid=7445 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:14.067160 kernel: audit: type=1104 audit(1761962653.508:562): pid=7445 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:13.508000 audit[7445]: CRED_DISP pid=7445 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:13.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-139.178.90.71:22-147.75.109.163:50722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 02:04:15.514321 kubelet[2679]: E1101 02:04:15.514199 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bm44l" podUID="6bd4bd36-d549-4194-a331-51709a095bb2"
Nov 1 02:04:15.515483 kubelet[2679]: E1101 02:04:15.514986 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b79df9786-ds9vj" podUID="aabe0a9d-10db-49d2-a1d8-2a8011591b5d"
Nov 1 02:04:18.513815 kubelet[2679]: E1101 02:04:18.513673 2679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fbf49c57b-d5g9p" podUID="381a5ea3-a9a9-42e2-8c3a-9c0b410afe13"
Nov 1 02:04:18.517266 systemd[1]: Started sshd@26-139.178.90.71:22-147.75.109.163:50734.service.
Nov 1 02:04:18.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-139.178.90.71:22-147.75.109.163:50734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 02:04:18.557514 kernel: kauditd_printk_skb: 1 callbacks suppressed
Nov 1 02:04:18.557604 kernel: audit: type=1130 audit(1761962658.516:564): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-139.178.90.71:22-147.75.109.163:50734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 02:04:18.673000 audit[7474]: USER_ACCT pid=7474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:18.673968 sshd[7474]: Accepted publickey for core from 147.75.109.163 port 50734 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ
Nov 1 02:04:18.675600 sshd[7474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 02:04:18.678017 systemd-logind[1668]: New session 28 of user core.
Nov 1 02:04:18.678601 systemd[1]: Started session-28.scope.
Nov 1 02:04:18.756735 sshd[7474]: pam_unix(sshd:session): session closed for user core
Nov 1 02:04:18.758022 systemd[1]: sshd@26-139.178.90.71:22-147.75.109.163:50734.service: Deactivated successfully.
Nov 1 02:04:18.758715 systemd[1]: session-28.scope: Deactivated successfully.
Nov 1 02:04:18.758734 systemd-logind[1668]: Session 28 logged out. Waiting for processes to exit.
Nov 1 02:04:18.759238 systemd-logind[1668]: Removed session 28.
Nov 1 02:04:18.674000 audit[7474]: CRED_ACQ pid=7474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:18.855626 kernel: audit: type=1101 audit(1761962658.673:565): pid=7474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:18.855694 kernel: audit: type=1103 audit(1761962658.674:566): pid=7474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:18.855716 kernel: audit: type=1006 audit(1761962658.674:567): pid=7474 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Nov 1 02:04:18.674000 audit[7474]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb335b0e0 a2=3 a3=0 items=0 ppid=1 pid=7474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 02:04:19.006149 kernel: audit: type=1300 audit(1761962658.674:567): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb335b0e0 a2=3 a3=0 items=0 ppid=1 pid=7474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 02:04:19.006198 kernel: audit: type=1327 audit(1761962658.674:567): proctitle=737368643A20636F7265205B707269765D
Nov 1 02:04:18.674000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 02:04:19.036574 kernel: audit: type=1105 audit(1761962658.679:568): pid=7474 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:18.679000 audit[7474]: USER_START pid=7474 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:19.131010 kernel: audit: type=1103 audit(1761962658.680:569): pid=7477 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:18.680000 audit[7477]: CRED_ACQ pid=7477 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:18.755000 audit[7474]: USER_END pid=7474 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:19.315656 kernel: audit: type=1106 audit(1761962658.755:570): pid=7474 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:19.315720 kernel: audit: type=1104 audit(1761962658.755:571): pid=7474 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:18.755000 audit[7474]: CRED_DISP pid=7474 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Nov 1 02:04:18.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-139.178.90.71:22-147.75.109.163:50734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'