Nov 1 00:42:50.559225 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025 Nov 1 00:42:50.559237 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:42:50.559244 kernel: BIOS-provided physical RAM map: Nov 1 00:42:50.559248 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Nov 1 00:42:50.559252 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Nov 1 00:42:50.559256 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Nov 1 00:42:50.559260 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Nov 1 00:42:50.559264 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Nov 1 00:42:50.559268 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000008253dfff] usable Nov 1 00:42:50.559272 kernel: BIOS-e820: [mem 0x000000008253e000-0x000000008253efff] ACPI NVS Nov 1 00:42:50.559277 kernel: BIOS-e820: [mem 0x000000008253f000-0x000000008253ffff] reserved Nov 1 00:42:50.559280 kernel: BIOS-e820: [mem 0x0000000082540000-0x000000008afcdfff] usable Nov 1 00:42:50.559284 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved Nov 1 00:42:50.559288 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable Nov 1 00:42:50.559293 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS Nov 1 00:42:50.559298 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved Nov 1 00:42:50.559302 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Nov 1 00:42:50.559307 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Nov 1 00:42:50.559311 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 1 00:42:50.559315 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Nov 1 00:42:50.559319 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Nov 1 00:42:50.559323 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Nov 1 00:42:50.559328 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Nov 1 00:42:50.559332 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Nov 1 00:42:50.559336 kernel: NX (Execute Disable) protection: active Nov 1 00:42:50.559340 kernel: SMBIOS 3.2.1 present. 
Nov 1 00:42:50.559345 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024 Nov 1 00:42:50.559350 kernel: tsc: Detected 3400.000 MHz processor Nov 1 00:42:50.559354 kernel: tsc: Detected 3399.906 MHz TSC Nov 1 00:42:50.559358 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 00:42:50.559363 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 00:42:50.559367 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Nov 1 00:42:50.559371 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 00:42:50.559376 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Nov 1 00:42:50.559380 kernel: Using GB pages for direct mapping Nov 1 00:42:50.559384 kernel: ACPI: Early table checksum verification disabled Nov 1 00:42:50.559389 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Nov 1 00:42:50.559394 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Nov 1 00:42:50.559398 kernel: ACPI: FACP 0x000000008C58B670 000114 (v06 01072009 AMI 00010013) Nov 1 00:42:50.559402 kernel: ACPI: DSDT 0x000000008C54F268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Nov 1 00:42:50.559409 kernel: ACPI: FACS 0x000000008C66DF80 000040 Nov 1 00:42:50.559413 kernel: ACPI: APIC 0x000000008C58B788 00012C (v04 01072009 AMI 00010013) Nov 1 00:42:50.559419 kernel: ACPI: FPDT 0x000000008C58B8B8 000044 (v01 01072009 AMI 00010013) Nov 1 00:42:50.559424 kernel: ACPI: FIDT 0x000000008C58B900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Nov 1 00:42:50.559428 kernel: ACPI: MCFG 0x000000008C58B9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Nov 1 00:42:50.559433 kernel: ACPI: SPMI 0x000000008C58B9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Nov 1 00:42:50.559437 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Nov 1 00:42:50.559442 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Nov 1 00:42:50.559447 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Nov 1 00:42:50.559451 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:42:50.559457 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Nov 1 00:42:50.559462 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Nov 1 00:42:50.559466 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:42:50.559471 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:42:50.559476 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Nov 1 00:42:50.559480 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Nov 1 00:42:50.559485 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:42:50.559490 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:42:50.559497 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Nov 1 00:42:50.559502 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013) Nov 1 00:42:50.559528 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Nov 1 00:42:50.559533 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Nov 1 00:42:50.559537 kernel: ACPI: SSDT 
0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Nov 1 00:42:50.559542 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013) Nov 1 00:42:50.559547 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Nov 1 00:42:50.559552 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Nov 1 00:42:50.559573 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Nov 1 00:42:50.559579 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Nov 1 00:42:50.559583 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Nov 1 00:42:50.559588 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783] Nov 1 00:42:50.559593 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b] Nov 1 00:42:50.559597 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf] Nov 1 00:42:50.559602 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3] Nov 1 00:42:50.559607 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb] Nov 1 00:42:50.559611 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b] Nov 1 00:42:50.559617 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db] Nov 1 00:42:50.559621 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20] Nov 1 00:42:50.559626 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543] Nov 1 00:42:50.559631 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d] Nov 1 00:42:50.559635 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a] Nov 1 00:42:50.559640 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77] Nov 1 00:42:50.559645 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25] Nov 1 00:42:50.559649 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b] Nov 1 00:42:50.559654 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361] Nov 1 00:42:50.559659 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb] Nov 1 00:42:50.559664 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd] Nov 1 00:42:50.559669 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1] Nov 1 00:42:50.559673 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb] Nov 1 00:42:50.559678 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598100-0x8c598153] Nov 1 00:42:50.559682 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe] Nov 1 00:42:50.559687 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f] Nov 1 00:42:50.559692 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73] Nov 1 00:42:50.559696 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab] Nov 1 00:42:50.559702 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e] Nov 1 00:42:50.559706 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67] Nov 1 00:42:50.559711 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97] Nov 1 00:42:50.559716 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7] Nov 1 00:42:50.559720 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7] Nov 1 00:42:50.559725 kernel: ACPI: Reserving HEST table memory at [mem 
0x8c59aff8-0x8c59b273] Nov 1 00:42:50.559730 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9] Nov 1 00:42:50.559734 kernel: No NUMA configuration found Nov 1 00:42:50.559739 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Nov 1 00:42:50.559744 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Nov 1 00:42:50.559749 kernel: Zone ranges: Nov 1 00:42:50.559754 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 00:42:50.559758 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 1 00:42:50.559763 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Nov 1 00:42:50.559768 kernel: Movable zone start for each node Nov 1 00:42:50.559772 kernel: Early memory node ranges Nov 1 00:42:50.559777 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Nov 1 00:42:50.559782 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Nov 1 00:42:50.559786 kernel: node 0: [mem 0x0000000040400000-0x000000008253dfff] Nov 1 00:42:50.559792 kernel: node 0: [mem 0x0000000082540000-0x000000008afcdfff] Nov 1 00:42:50.559796 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff] Nov 1 00:42:50.559801 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Nov 1 00:42:50.559805 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Nov 1 00:42:50.559810 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Nov 1 00:42:50.559815 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:42:50.559823 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Nov 1 00:42:50.559829 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 1 00:42:50.559834 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Nov 1 00:42:50.559839 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Nov 1 00:42:50.559845 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges Nov 1 00:42:50.559850 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Nov 1 00:42:50.559855 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Nov 1 00:42:50.559860 kernel: ACPI: PM-Timer IO Port: 0x1808 Nov 1 00:42:50.559865 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 1 00:42:50.559870 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 1 00:42:50.559875 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 1 00:42:50.559881 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 1 00:42:50.559885 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 1 00:42:50.559890 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 1 00:42:50.559895 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 1 00:42:50.559900 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 1 00:42:50.559905 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 1 00:42:50.559910 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 1 00:42:50.559915 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Nov 1 00:42:50.559920 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Nov 1 00:42:50.559926 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 1 00:42:50.559931 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 1 00:42:50.559936 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 1 00:42:50.559941 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 1 00:42:50.559946 kernel: IOAPIC[0]: apic_id 2, version 32, address 
0xfec00000, GSI 0-119 Nov 1 00:42:50.559951 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 00:42:50.559956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:42:50.559961 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:42:50.559966 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 00:42:50.559971 kernel: TSC deadline timer available Nov 1 00:42:50.559976 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Nov 1 00:42:50.559981 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Nov 1 00:42:50.559986 kernel: Booting paravirtualized kernel on bare hardware Nov 1 00:42:50.559992 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:42:50.559997 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Nov 1 00:42:50.560002 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Nov 1 00:42:50.560007 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Nov 1 00:42:50.560012 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 1 00:42:50.560017 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416 Nov 1 00:42:50.560022 kernel: Policy zone: Normal Nov 1 00:42:50.560028 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:42:50.560033 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 1 00:42:50.560038 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Nov 1 00:42:50.560043 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Nov 1 00:42:50.560048 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:42:50.560054 kernel: Memory: 32722608K/33452984K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 730116K reserved, 0K cma-reserved) Nov 1 00:42:50.560059 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 1 00:42:50.560064 kernel: ftrace: allocating 34614 entries in 136 pages Nov 1 00:42:50.560069 kernel: ftrace: allocated 136 pages with 2 groups Nov 1 00:42:50.560074 kernel: rcu: Hierarchical RCU implementation. Nov 1 00:42:50.560080 kernel: rcu: RCU event tracing is enabled. Nov 1 00:42:50.560085 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 1 00:42:50.560090 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:42:50.560095 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:42:50.560101 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 1 00:42:50.560106 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 1 00:42:50.560111 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Nov 1 00:42:50.560116 kernel: random: crng init done Nov 1 00:42:50.560121 kernel: Console: colour dummy device 80x25 Nov 1 00:42:50.560125 kernel: printk: console [tty0] enabled Nov 1 00:42:50.560130 kernel: printk: console [ttyS1] enabled Nov 1 00:42:50.560135 kernel: ACPI: Core revision 20210730 Nov 1 00:42:50.560141 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Nov 1 00:42:50.560146 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:42:50.560151 kernel: DMAR: Host address width 39 Nov 1 00:42:50.560156 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Nov 1 00:42:50.560161 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Nov 1 00:42:50.560166 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff Nov 1 00:42:50.560171 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Nov 1 00:42:50.560176 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Nov 1 00:42:50.560181 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Nov 1 00:42:50.560186 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Nov 1 00:42:50.560191 kernel: x2apic enabled Nov 1 00:42:50.560197 kernel: Switched APIC routing to cluster x2apic. Nov 1 00:42:50.560202 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Nov 1 00:42:50.560207 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Nov 1 00:42:50.560213 kernel: CPU0: Thermal monitoring enabled (TM1) Nov 1 00:42:50.560218 kernel: process: using mwait in idle threads Nov 1 00:42:50.560222 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 1 00:42:50.560227 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 1 00:42:50.560232 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:42:50.560237 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Nov 1 00:42:50.560243 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 1 00:42:50.560248 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 1 00:42:50.560253 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 1 00:42:50.560258 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 1 00:42:50.560263 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 1 00:42:50.560267 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 00:42:50.560272 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Nov 1 00:42:50.560277 kernel: TAA: Mitigation: TSX disabled Nov 1 00:42:50.560282 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Nov 1 00:42:50.560287 kernel: SRBDS: Mitigation: Microcode Nov 1 00:42:50.560292 kernel: GDS: Mitigation: Microcode Nov 1 00:42:50.560298 kernel: active return thunk: its_return_thunk Nov 1 00:42:50.560302 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 00:42:50.560307 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:42:50.560312 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:42:50.560317 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:42:50.560322 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 1 00:42:50.560327 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Nov 1 00:42:50.560332 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:42:50.560337 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 1 00:42:50.560342 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 1 00:42:50.560347 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Nov 1 00:42:50.560353 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:42:50.560357 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:42:50.560362 kernel: LSM: Security Framework initializing Nov 1 00:42:50.560367 kernel: SELinux: Initializing. Nov 1 00:42:50.560372 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:42:50.560377 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:42:50.560382 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Nov 1 00:42:50.560387 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 1 00:42:50.560392 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Nov 1 00:42:50.560397 kernel: ... version: 4 Nov 1 00:42:50.560402 kernel: ... bit width: 48 Nov 1 00:42:50.560408 kernel: ... generic registers: 4 Nov 1 00:42:50.560413 kernel: ... value mask: 0000ffffffffffff Nov 1 00:42:50.560418 kernel: ... max period: 00007fffffffffff Nov 1 00:42:50.560423 kernel: ... fixed-purpose events: 3 Nov 1 00:42:50.560428 kernel: ... event mask: 000000070000000f Nov 1 00:42:50.560433 kernel: signal: max sigframe size: 2032 Nov 1 00:42:50.560437 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:42:50.560442 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Nov 1 00:42:50.560447 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:42:50.560453 kernel: x86: Booting SMP configuration: Nov 1 00:42:50.560458 kernel: .... 
node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Nov 1 00:42:50.560463 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 1 00:42:50.560468 kernel: #9 #10 #11 #12 #13 #14 #15 Nov 1 00:42:50.560473 kernel: smp: Brought up 1 node, 16 CPUs Nov 1 00:42:50.560478 kernel: smpboot: Max logical packages: 1 Nov 1 00:42:50.560483 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Nov 1 00:42:50.560488 kernel: devtmpfs: initialized Nov 1 00:42:50.560495 kernel: x86/mm: Memory block size: 128MB Nov 1 00:42:50.560501 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8253e000-0x8253efff] (4096 bytes) Nov 1 00:42:50.560525 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes) Nov 1 00:42:50.560530 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:42:50.560535 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 1 00:42:50.560540 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:42:50.560561 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:42:50.560566 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:42:50.560571 kernel: audit: type=2000 audit(1761957765.041:1): state=initialized audit_enabled=0 res=1 Nov 1 00:42:50.560576 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:42:50.560582 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:42:50.560587 kernel: cpuidle: using governor menu Nov 1 00:42:50.560592 kernel: ACPI: bus type PCI registered Nov 1 00:42:50.560597 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:42:50.560602 kernel: dca service started, version 1.12.1 Nov 1 00:42:50.560607 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 1 00:42:50.560612 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Nov 1 00:42:50.560617 kernel: PCI: Using configuration type 1 for base access Nov 1 00:42:50.560622 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Nov 1 00:42:50.560628 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 1 00:42:50.560633 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:42:50.560638 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:42:50.560643 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:42:50.560648 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:42:50.560653 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:42:50.560658 kernel: ACPI: Added _OSI(Linux-Dell-Video) Nov 1 00:42:50.560662 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Nov 1 00:42:50.560667 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Nov 1 00:42:50.560673 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Nov 1 00:42:50.560678 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:42:50.560683 kernel: ACPI: SSDT 0xFFFF88BB0021BA00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Nov 1 00:42:50.560688 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Nov 1 00:42:50.560693 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:42:50.560698 kernel: ACPI: SSDT 0xFFFF88BB01AE0400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Nov 1 00:42:50.560703 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:42:50.560708 kernel: ACPI: SSDT 0xFFFF88BB01A5B800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Nov 1 00:42:50.560713 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:42:50.560719 kernel: ACPI: SSDT 0xFFFF88BB01B4F800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Nov 1 00:42:50.560724 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:42:50.560728 kernel: ACPI: SSDT 0xFFFF88BB0014F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Nov 1 00:42:50.560733 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:42:50.560738 kernel: ACPI: SSDT 0xFFFF88BB01AE6400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Nov 1 00:42:50.560743 kernel: ACPI: Interpreter enabled Nov 1 00:42:50.560748 kernel: ACPI: PM: (supports S0 S5) Nov 1 00:42:50.560753 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:42:50.560758 kernel: HEST: Enabling Firmware First mode for corrected errors. Nov 1 00:42:50.560763 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Nov 1 00:42:50.560769 kernel: HEST: Table parsing has been initialized. Nov 1 00:42:50.560774 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Nov 1 00:42:50.560779 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:42:50.560784 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Nov 1 00:42:50.560789 kernel: ACPI: PM: Power Resource [USBC] Nov 1 00:42:50.560794 kernel: ACPI: PM: Power Resource [V0PR] Nov 1 00:42:50.560799 kernel: ACPI: PM: Power Resource [V1PR] Nov 1 00:42:50.560804 kernel: ACPI: PM: Power Resource [V2PR] Nov 1 00:42:50.560808 kernel: ACPI: PM: Power Resource [WRST] Nov 1 00:42:50.560814 kernel: ACPI: PM: Power Resource [FN00] Nov 1 00:42:50.560819 kernel: ACPI: PM: Power Resource [FN01] Nov 1 00:42:50.560824 kernel: ACPI: PM: Power Resource [FN02] Nov 1 00:42:50.560829 kernel: ACPI: PM: Power Resource [FN03] Nov 1 00:42:50.560834 kernel: ACPI: PM: Power Resource [FN04] Nov 1 00:42:50.560839 kernel: ACPI: PM: Power Resource [PIN] Nov 1 00:42:50.560844 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Nov 1 00:42:50.560910 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:42:50.560959 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Nov 1 00:42:50.561002 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Nov 1 00:42:50.561010 kernel: PCI host bridge to bus 0000:00 Nov 1 00:42:50.561055 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:42:50.561094 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:42:50.561133 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:42:50.561170 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Nov 1 00:42:50.561210 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Nov 1 00:42:50.561249 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Nov 1 00:42:50.561300 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Nov 1 00:42:50.561352 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Nov 1 00:42:50.561398 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Nov 1 00:42:50.561446 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Nov 1 00:42:50.561493 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Nov 1 00:42:50.561565 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Nov 1 00:42:50.561611 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Nov 1 00:42:50.561659 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Nov 1 00:42:50.561705 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Nov 1 00:42:50.561751 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Nov 1 00:42:50.561800 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Nov 1 00:42:50.561846 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Nov 1 00:42:50.561890 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Nov 1 00:42:50.561939 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Nov 1 00:42:50.561984 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 00:42:50.562034 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Nov 1 00:42:50.562081 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 00:42:50.562128 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Nov 1 00:42:50.562172 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Nov 1 00:42:50.562216 
kernel: pci 0000:00:16.0: PME# supported from D3hot Nov 1 00:42:50.562264 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Nov 1 00:42:50.562308 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Nov 1 00:42:50.562352 kernel: pci 0000:00:16.1: PME# supported from D3hot Nov 1 00:42:50.562401 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Nov 1 00:42:50.562447 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Nov 1 00:42:50.562490 kernel: pci 0000:00:16.4: PME# supported from D3hot Nov 1 00:42:50.562540 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Nov 1 00:42:50.562585 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Nov 1 00:42:50.562631 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Nov 1 00:42:50.562683 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Nov 1 00:42:50.562730 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Nov 1 00:42:50.562774 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Nov 1 00:42:50.562818 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Nov 1 00:42:50.562862 kernel: pci 0000:00:17.0: PME# supported from D3hot Nov 1 00:42:50.562911 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Nov 1 00:42:50.562955 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Nov 1 00:42:50.563005 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Nov 1 00:42:50.563052 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Nov 1 00:42:50.563103 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Nov 1 00:42:50.563148 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Nov 1 00:42:50.563197 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Nov 1 00:42:50.563241 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Nov 1 00:42:50.563291 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Nov 1 00:42:50.563337 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Nov 1 00:42:50.563386 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Nov 1 00:42:50.563432 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 00:42:50.563482 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Nov 1 00:42:50.563533 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Nov 1 00:42:50.563578 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Nov 1 00:42:50.563623 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Nov 1 00:42:50.563673 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Nov 1 00:42:50.563717 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Nov 1 00:42:50.563771 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Nov 1 00:42:50.563818 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Nov 1 00:42:50.563864 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Nov 1 00:42:50.563911 kernel: pci 0000:01:00.0: PME# supported from D3cold Nov 1 00:42:50.563958 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 00:42:50.564004 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 00:42:50.564055 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Nov 1 00:42:50.564104 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Nov 1 00:42:50.564181 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff 
pref] Nov 1 00:42:50.564262 kernel: pci 0000:01:00.1: PME# supported from D3cold Nov 1 00:42:50.564309 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 00:42:50.564355 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 00:42:50.564400 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 00:42:50.564446 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 00:42:50.564495 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 00:42:50.564541 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 00:42:50.564591 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Nov 1 00:42:50.564639 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Nov 1 00:42:50.564685 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 1 00:42:50.564731 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Nov 1 00:42:50.564777 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 1 00:42:50.564823 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 1 00:42:50.564871 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 00:42:50.564915 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 00:42:50.564961 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 00:42:50.565010 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 1 00:42:50.565057 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 1 00:42:50.565104 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 1 00:42:50.565150 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Nov 1 00:42:50.565199 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 1 00:42:50.565245 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 1 00:42:50.565291 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 00:42:50.565335 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 00:42:50.565380 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 00:42:50.565425 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 00:42:50.565476 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Nov 1 00:42:50.565526 kernel: pci 0000:06:00.0: enabling Extended Tags Nov 1 00:42:50.565575 kernel: pci 0000:06:00.0: supports D1 D2 Nov 1 00:42:50.565622 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 00:42:50.565667 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 00:42:50.565712 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 00:42:50.565757 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 00:42:50.565808 kernel: pci_bus 0000:07: extended config space not accessible Nov 1 00:42:50.565862 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Nov 1 00:42:50.565913 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 1 00:42:50.565962 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 1 00:42:50.566011 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Nov 1 00:42:50.566060 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 00:42:50.566107 kernel: pci 0000:07:00.0: supports D1 D2 Nov 1 00:42:50.566157 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 00:42:50.566203 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 00:42:50.566252 kernel: pci 0000:06:00.0: 
bridge window [io 0x3000-0x3fff] Nov 1 00:42:50.566299 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 00:42:50.566306 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 1 00:42:50.566312 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 1 00:42:50.566318 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 1 00:42:50.566323 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 1 00:42:50.566330 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 1 00:42:50.566335 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 1 00:42:50.566341 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 1 00:42:50.566347 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 1 00:42:50.566353 kernel: iommu: Default domain type: Translated Nov 1 00:42:50.566358 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:42:50.566405 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Nov 1 00:42:50.566454 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 00:42:50.566506 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Nov 1 00:42:50.566534 kernel: vgaarb: loaded Nov 1 00:42:50.566539 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 00:42:50.566545 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 00:42:50.566551 kernel: PTP clock support registered Nov 1 00:42:50.566557 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:42:50.566562 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:42:50.566567 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Nov 1 00:42:50.566572 kernel: e820: reserve RAM buffer [mem 0x8253e000-0x83ffffff] Nov 1 00:42:50.566579 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Nov 1 00:42:50.566584 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Nov 1 00:42:50.566589 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 1 00:42:50.566594 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 1 00:42:50.566600 kernel: clocksource: Switched to clocksource tsc-early Nov 1 00:42:50.566606 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:42:50.566611 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:42:50.566616 kernel: pnp: PnP ACPI init Nov 1 00:42:50.566662 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 1 00:42:50.566707 kernel: pnp 00:02: [dma 0 disabled] Nov 1 00:42:50.566751 kernel: pnp 00:03: [dma 0 disabled] Nov 1 00:42:50.566799 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 1 00:42:50.566840 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 1 00:42:50.566882 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Nov 1 00:42:50.566926 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Nov 1 00:42:50.566965 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Nov 1 00:42:50.567004 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Nov 1 00:42:50.567046 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Nov 1 00:42:50.567085 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 1 00:42:50.567125 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 1 00:42:50.567164 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 1 00:42:50.567204 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could 
not be reserved Nov 1 00:42:50.567247 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Nov 1 00:42:50.567288 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 1 00:42:50.567330 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 1 00:42:50.567369 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 1 00:42:50.567409 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 1 00:42:50.567448 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 1 00:42:50.567488 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Nov 1 00:42:50.567554 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Nov 1 00:42:50.567562 kernel: pnp: PnP ACPI: found 10 devices Nov 1 00:42:50.567569 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:42:50.567575 kernel: NET: Registered PF_INET protocol family Nov 1 00:42:50.567580 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:42:50.567586 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 00:42:50.567591 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:42:50.567597 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:42:50.567602 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Nov 1 00:42:50.567608 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 1 00:42:50.567613 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 00:42:50.567619 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 00:42:50.567625 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:42:50.567630 kernel: NET: Registered PF_XDP protocol family Nov 1 00:42:50.567676 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 1 00:42:50.567721 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 1 00:42:50.567765 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 1 00:42:50.567813 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 00:42:50.567858 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 00:42:50.567907 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 00:42:50.567954 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 00:42:50.568002 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 00:42:50.568047 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 00:42:50.568092 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 00:42:50.568137 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 00:42:50.568184 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 00:42:50.568229 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 00:42:50.568273 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 00:42:50.568318 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 00:42:50.568363 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 00:42:50.568408 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 00:42:50.568453 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 00:42:50.568504 kernel: pci 
0000:06:00.0: PCI bridge to [bus 07] Nov 1 00:42:50.568551 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 00:42:50.568598 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 00:42:50.568644 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 00:42:50.568688 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 00:42:50.568734 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 00:42:50.568774 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 1 00:42:50.568815 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 00:42:50.568854 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 00:42:50.568894 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 00:42:50.568934 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 1 00:42:50.568973 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 1 00:42:50.569020 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Nov 1 00:42:50.569062 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 00:42:50.569110 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Nov 1 00:42:50.569153 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Nov 1 00:42:50.569199 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 1 00:42:50.569241 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Nov 1 00:42:50.569286 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Nov 1 00:42:50.569329 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Nov 1 00:42:50.569372 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 1 00:42:50.569416 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 1 00:42:50.569425 kernel: PCI: CLS 64 bytes, default 64 Nov 1 00:42:50.569431 kernel: DMAR: No ATSR found Nov 1 00:42:50.569437 kernel: DMAR: No SATC found Nov 1 00:42:50.569442 kernel: DMAR: dmar0: Using Queued invalidation Nov 1 00:42:50.569487 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 1 00:42:50.569535 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 1 00:42:50.569581 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 1 00:42:50.569624 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 1 00:42:50.569672 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 1 00:42:50.569716 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 1 00:42:50.569760 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 1 00:42:50.569804 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 1 00:42:50.569849 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 1 00:42:50.569893 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 1 00:42:50.569937 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 1 00:42:50.569981 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 1 00:42:50.570024 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 1 00:42:50.570072 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 1 00:42:50.570116 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 1 00:42:50.570162 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 1 00:42:50.570207 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Nov 1 00:42:50.570252 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 1 00:42:50.570296 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 1 00:42:50.570341 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 1 00:42:50.570386 
kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 1 00:42:50.570434 kernel: pci 0000:01:00.0: Adding to iommu group 1 Nov 1 00:42:50.570482 kernel: pci 0000:01:00.1: Adding to iommu group 1 Nov 1 00:42:50.570531 kernel: pci 0000:03:00.0: Adding to iommu group 15 Nov 1 00:42:50.570580 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 1 00:42:50.570626 kernel: pci 0000:06:00.0: Adding to iommu group 17 Nov 1 00:42:50.570675 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 1 00:42:50.570682 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 1 00:42:50.570688 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 00:42:50.570695 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Nov 1 00:42:50.570701 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 1 00:42:50.570706 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 1 00:42:50.570712 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 1 00:42:50.570717 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 1 00:42:50.570781 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 1 00:42:50.570789 kernel: Initialise system trusted keyrings Nov 1 00:42:50.570795 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 1 00:42:50.570801 kernel: Key type asymmetric registered Nov 1 00:42:50.570807 kernel: Asymmetric key parser 'x509' registered Nov 1 00:42:50.570812 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 1 00:42:50.570817 kernel: io scheduler mq-deadline registered Nov 1 00:42:50.570823 kernel: io scheduler kyber registered Nov 1 00:42:50.570828 kernel: io scheduler bfq registered Nov 1 00:42:50.570873 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 1 00:42:50.570917 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Nov 1 00:42:50.570962 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Nov 1 00:42:50.571009 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Nov 1 00:42:50.571052 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Nov 1 00:42:50.571098 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Nov 1 00:42:50.571147 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 1 00:42:50.571154 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 1 00:42:50.571160 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Nov 1 00:42:50.571165 kernel: pstore: Registered erst as persistent store backend Nov 1 00:42:50.571171 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:42:50.571177 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:42:50.571183 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:42:50.571188 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 1 00:42:50.571194 kernel: hpet_acpi_add: no address or irqs in _CRS Nov 1 00:42:50.571240 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Nov 1 00:42:50.571248 kernel: i8042: PNP: No PS/2 controller found. 
Nov 1 00:42:50.571287 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Nov 1 00:42:50.571329 kernel: rtc_cmos rtc_cmos: registered as rtc0 Nov 1 00:42:50.571371 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-01T00:42:49 UTC (1761957769) Nov 1 00:42:50.571411 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Nov 1 00:42:50.571419 kernel: intel_pstate: Intel P-state driver initializing Nov 1 00:42:50.571424 kernel: intel_pstate: Disabling energy efficiency optimization Nov 1 00:42:50.571430 kernel: intel_pstate: HWP enabled Nov 1 00:42:50.571435 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Nov 1 00:42:50.571440 kernel: vesafb: scrolling: redraw Nov 1 00:42:50.571446 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Nov 1 00:42:50.571452 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000077cc6674, using 768k, total 768k Nov 1 00:42:50.571458 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 00:42:50.571463 kernel: fb0: VESA VGA frame buffer device Nov 1 00:42:50.571468 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:42:50.571474 kernel: Segment Routing with IPv6 Nov 1 00:42:50.571479 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:42:50.571484 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:42:50.571490 kernel: Key type dns_resolver registered Nov 1 00:42:50.571519 kernel: microcode: sig=0x906ed, pf=0x2, revision=0x102 Nov 1 00:42:50.571525 kernel: microcode: Microcode Update Driver: v2.2. Nov 1 00:42:50.571531 kernel: IPI shorthand broadcast: enabled Nov 1 00:42:50.571536 kernel: sched_clock: Marking stable (1688083189, 1339962391)->(4479884730, -1451839150) Nov 1 00:42:50.571542 kernel: registered taskstats version 1 Nov 1 00:42:50.571567 kernel: Loading compiled-in X.509 certificates Nov 1 00:42:50.571572 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0' Nov 1 00:42:50.571577 kernel: Key type .fscrypt registered Nov 1 00:42:50.571582 kernel: Key type fscrypt-provisioning registered Nov 1 00:42:50.571588 kernel: pstore: Using crash dump compression: deflate Nov 1 00:42:50.571594 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:42:50.571599 kernel: ima: No architecture policies found Nov 1 00:42:50.571604 kernel: clk: Disabling unused clocks Nov 1 00:42:50.571610 kernel: Freeing unused kernel image (initmem) memory: 47496K Nov 1 00:42:50.571615 kernel: Write protecting the kernel read-only data: 28672k Nov 1 00:42:50.571620 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Nov 1 00:42:50.571626 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Nov 1 00:42:50.571631 kernel: Run /init as init process Nov 1 00:42:50.571636 kernel: with arguments: Nov 1 00:42:50.571642 kernel: /init Nov 1 00:42:50.571648 kernel: with environment: Nov 1 00:42:50.571653 kernel: HOME=/ Nov 1 00:42:50.571658 kernel: TERM=linux Nov 1 00:42:50.571663 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 1 00:42:50.571670 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:42:50.571676 systemd[1]: Detected architecture x86-64. Nov 1 00:42:50.571682 systemd[1]: Running in initrd. 
Nov 1 00:42:50.571688 systemd[1]: No hostname configured, using default hostname. Nov 1 00:42:50.571694 systemd[1]: Hostname set to . Nov 1 00:42:50.571699 systemd[1]: Initializing machine ID from random generator. Nov 1 00:42:50.571705 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:42:50.571710 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:42:50.571716 systemd[1]: Reached target cryptsetup.target. Nov 1 00:42:50.571721 systemd[1]: Reached target paths.target. Nov 1 00:42:50.571726 systemd[1]: Reached target slices.target. Nov 1 00:42:50.571733 systemd[1]: Reached target swap.target. Nov 1 00:42:50.571738 systemd[1]: Reached target timers.target. Nov 1 00:42:50.571743 systemd[1]: Listening on iscsid.socket. Nov 1 00:42:50.571749 systemd[1]: Listening on iscsiuio.socket. Nov 1 00:42:50.571755 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 00:42:50.571760 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 00:42:50.571766 systemd[1]: Listening on systemd-journald.socket. Nov 1 00:42:50.571772 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:42:50.571778 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:42:50.571783 kernel: tsc: Refined TSC clocksource calibration: 3408.047 MHz Nov 1 00:42:50.571788 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x31200094248, max_idle_ns: 440795318142 ns Nov 1 00:42:50.571794 kernel: clocksource: Switched to clocksource tsc Nov 1 00:42:50.571799 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:42:50.571805 systemd[1]: Reached target sockets.target. Nov 1 00:42:50.571810 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:42:50.571816 systemd[1]: Finished network-cleanup.service. Nov 1 00:42:50.571822 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:42:50.571828 systemd[1]: Starting systemd-journald.service... Nov 1 00:42:50.571833 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:42:50.571841 systemd-journald[268]: Journal started Nov 1 00:42:50.571868 systemd-journald[268]: Runtime Journal (/run/log/journal/122c9e396f7443feba0b1e3139384a2d) is 8.0M, max 640.1M, 632.1M free. Nov 1 00:42:50.573618 systemd-modules-load[269]: Inserted module 'overlay' Nov 1 00:42:50.578000 audit: BPF prog-id=6 op=LOAD Nov 1 00:42:50.596552 kernel: audit: type=1334 audit(1761957770.578:2): prog-id=6 op=LOAD Nov 1 00:42:50.596568 systemd[1]: Starting systemd-resolved.service... Nov 1 00:42:50.646499 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:42:50.646514 systemd[1]: Starting systemd-vconsole-setup.service... Nov 1 00:42:50.679531 kernel: Bridge firewalling registered Nov 1 00:42:50.679548 systemd[1]: Started systemd-journald.service. Nov 1 00:42:50.694008 systemd-modules-load[269]: Inserted module 'br_netfilter' Nov 1 00:42:50.742361 kernel: audit: type=1130 audit(1761957770.702:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:50.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:50.696762 systemd-resolved[271]: Positive Trust Anchors: Nov 1 00:42:50.800586 kernel: SCSI subsystem initialized Nov 1 00:42:50.800597 kernel: audit: type=1130 audit(1761957770.754:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:50.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:50.696768 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:42:50.934706 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:42:50.934719 kernel: audit: type=1130 audit(1761957770.826:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:50.934727 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:42:50.934808 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 00:42:50.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:50.696790 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:42:51.008745 kernel: audit: type=1130 audit(1761957770.943:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:50.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:50.698414 systemd-resolved[271]: Defaulting to hostname 'linux'. Nov 1 00:42:51.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:50.702721 systemd[1]: Started systemd-resolved.service. Nov 1 00:42:51.117277 kernel: audit: type=1130 audit(1761957771.017:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:51.117288 kernel: audit: type=1130 audit(1761957771.070:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:51.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:50.754672 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:42:50.826659 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:42:50.924959 systemd-modules-load[269]: Inserted module 'dm_multipath' Nov 1 00:42:50.943798 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:42:51.017848 systemd[1]: Finished systemd-vconsole-setup.service. Nov 1 00:42:51.070789 systemd[1]: Reached target nss-lookup.target. Nov 1 00:42:51.126113 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 00:42:51.146048 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:42:51.146345 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:42:51.149328 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:42:51.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:51.150132 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:42:51.198593 kernel: audit: type=1130 audit(1761957771.148:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:51.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:51.211845 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 00:42:51.277594 kernel: audit: type=1130 audit(1761957771.211:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:51.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:51.270194 systemd[1]: Starting dracut-cmdline.service... Nov 1 00:42:51.291591 dracut-cmdline[294]: dracut-dracut-053 Nov 1 00:42:51.291591 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Nov 1 00:42:51.291591 dracut-cmdline[294]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:42:51.362576 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:42:51.362589 kernel: iscsi: registered transport (tcp) Nov 1 00:42:51.421126 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:42:51.421143 kernel: QLogic iSCSI HBA Driver Nov 1 00:42:51.437207 systemd[1]: Finished dracut-cmdline.service. Nov 1 00:42:51.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:51.447221 systemd[1]: Starting dracut-pre-udev.service... 
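The positive trust anchor systemd-resolved logs above is the root-zone DS record (key tag 20326, algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256); the negative anchors are the usual private and special-use zones that should never be DNSSEC-validated. A small sketch that splits a DS record in that presentation format into its fields, assuming the `owner IN DS tag alg dtype digest` layout shown in the log:

```python
# Parse a DNSSEC DS record in presentation format, e.g. the root trust anchor
# systemd-resolved printed above.
from typing import NamedTuple

class DSRecord(NamedTuple):
    owner: str
    key_tag: int
    algorithm: int      # 8 = RSA/SHA-256
    digest_type: int    # 2 = SHA-256
    digest: str

def parse_ds(line: str) -> DSRecord:
    owner, _class, rtype, tag, alg, dtype, digest = line.split()
    assert rtype.upper() == "DS", "not a DS record"
    return DSRecord(owner, int(tag), int(alg), int(dtype), digest.lower())

rec = parse_ds(". IN DS 20326 8 2 "
               "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
print(rec.key_tag, rec.algorithm, rec.digest_type)   # 20326 8 2
```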
Nov 1 00:42:51.503568 kernel: raid6: avx2x4 gen() 48637 MB/s Nov 1 00:42:51.538530 kernel: raid6: avx2x4 xor() 22105 MB/s Nov 1 00:42:51.573529 kernel: raid6: avx2x2 gen() 53497 MB/s Nov 1 00:42:51.608567 kernel: raid6: avx2x2 xor() 32031 MB/s Nov 1 00:42:51.643529 kernel: raid6: avx2x1 gen() 45033 MB/s Nov 1 00:42:51.678573 kernel: raid6: avx2x1 xor() 27843 MB/s Nov 1 00:42:51.713568 kernel: raid6: sse2x4 gen() 21312 MB/s Nov 1 00:42:51.747529 kernel: raid6: sse2x4 xor() 11975 MB/s Nov 1 00:42:51.781529 kernel: raid6: sse2x2 gen() 21651 MB/s Nov 1 00:42:51.815568 kernel: raid6: sse2x2 xor() 13409 MB/s Nov 1 00:42:51.849568 kernel: raid6: sse2x1 gen() 18266 MB/s Nov 1 00:42:51.901465 kernel: raid6: sse2x1 xor() 8912 MB/s Nov 1 00:42:51.901481 kernel: raid6: using algorithm avx2x2 gen() 53497 MB/s Nov 1 00:42:51.901489 kernel: raid6: .... xor() 32031 MB/s, rmw enabled Nov 1 00:42:51.919685 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:42:51.966529 kernel: xor: automatically using best checksumming function avx Nov 1 00:42:52.047528 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 00:42:52.052146 systemd[1]: Finished dracut-pre-udev.service. Nov 1 00:42:52.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:52.060000 audit: BPF prog-id=7 op=LOAD Nov 1 00:42:52.060000 audit: BPF prog-id=8 op=LOAD Nov 1 00:42:52.061407 systemd[1]: Starting systemd-udevd.service... Nov 1 00:42:52.069425 systemd-udevd[473]: Using default interface naming scheme 'v252'. Nov 1 00:42:52.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:52.074579 systemd[1]: Started systemd-udevd.service. Nov 1 00:42:52.112580 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Nov 1 00:42:52.088104 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 00:42:52.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:52.114433 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 00:42:52.130432 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:42:52.185475 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:42:52.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:52.212535 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:42:52.215506 kernel: libata version 3.00 loaded. Nov 1 00:42:52.250503 kernel: ACPI: bus type USB registered Nov 1 00:42:52.250539 kernel: usbcore: registered new interface driver usbfs Nov 1 00:42:52.250554 kernel: usbcore: registered new interface driver hub Nov 1 00:42:52.268345 kernel: usbcore: registered new device driver usb Nov 1 00:42:52.292509 kernel: ahci 0000:00:17.0: version 3.0 Nov 1 00:42:52.590119 kernel: AVX2 version of gcm_enc/dec engaged. 
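The raid6 lines above are the kernel benchmarking each gen()/xor() implementation and settling on avx2x2; the selection rule is essentially "fastest gen() wins". A toy reproduction of that choice using the throughput figures printed during this boot:

```python
# Toy reproduction of the raid6 algorithm selection above: benchmark results
# (MB/s) are the figures printed during this boot, and the kernel picks the
# implementation with the fastest gen().
gen_results = {
    "avx2x4": 48637, "avx2x2": 53497, "avx2x1": 45033,
    "sse2x4": 21312, "sse2x2": 21651, "sse2x1": 18266,
}
xor_results = {
    "avx2x4": 22105, "avx2x2": 32031, "avx2x1": 27843,
    "sse2x4": 11975, "sse2x2": 13409, "sse2x1": 8912,
}

best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
print(f"raid6: .... xor() {xor_results[best]} MB/s, rmw enabled")
# -> avx2x2, matching the kernel's choice above
```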
Nov 1 00:42:52.590135 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Nov 1 00:42:52.968507 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Nov 1 00:42:52.968616 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Nov 1 00:42:52.968684 kernel: scsi host0: ahci Nov 1 00:42:52.968758 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 00:42:52.968825 kernel: scsi host1: ahci Nov 1 00:42:52.968892 kernel: scsi host2: ahci Nov 1 00:42:52.968963 kernel: scsi host3: ahci Nov 1 00:42:52.969027 kernel: scsi host4: ahci Nov 1 00:42:52.969093 kernel: scsi host5: ahci Nov 1 00:42:52.969159 kernel: scsi host6: ahci Nov 1 00:42:52.969224 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Nov 1 00:42:52.969233 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Nov 1 00:42:52.969242 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Nov 1 00:42:52.969250 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Nov 1 00:42:52.969258 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Nov 1 00:42:52.969267 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Nov 1 00:42:52.969276 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Nov 1 00:42:52.969285 kernel: AES CTR mode by8 optimization enabled Nov 1 00:42:52.969293 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Nov 1 00:42:52.969301 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Nov 1 00:42:52.969309 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Nov 1 00:42:52.969374 kernel: igb 0000:03:00.0: added PHC on eth0 Nov 1 00:42:52.969440 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 00:42:52.969527 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Nov 1 00:42:52.969615 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:32:0c Nov 1 00:42:52.969679 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Nov 1 00:42:52.969742 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 1 00:42:52.969804 kernel: igb 0000:04:00.0: added PHC on eth1 Nov 1 00:42:52.969869 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 00:42:52.969930 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:32:0d Nov 1 00:42:52.969992 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Nov 1 00:42:52.970054 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Nov 1 00:42:52.970119 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 00:42:52.970128 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 00:42:52.970136 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 00:42:52.970145 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Nov 1 00:42:52.970207 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 00:42:52.970216 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Nov 1 00:42:53.807401 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 00:42:53.807412 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 00:42:53.807481 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 00:42:53.807489 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 1 00:42:53.807499 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 00:42:53.807506 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 00:42:53.807513 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 00:42:53.807520 kernel: ata2.00: Features: NCQ-prio Nov 1 00:42:53.807526 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 00:42:53.807533 kernel: ata1.00: Features: NCQ-prio Nov 1 00:42:53.807539 kernel: ata2.00: configured for UDMA/133 Nov 1 00:42:53.807547 kernel: ata1.00: configured for UDMA/133 Nov 1 00:42:53.807554 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 00:42:53.812856 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 00:42:53.812923 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Nov 1 00:42:53.812979 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 00:42:53.813031 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 1 00:42:53.813082 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 1 00:42:53.813133 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 00:42:53.813184 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 1 00:42:53.813233 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 1 00:42:53.813283 kernel: hub 1-0:1.0: USB hub found Nov 1 00:42:53.813342 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Nov 1 00:42:53.813395 kernel: hub 1-0:1.0: 16 ports detected Nov 1 00:42:53.813450 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Nov 1 00:42:53.813508 kernel: port_module: 9 callbacks suppressed Nov 1 00:42:53.813516 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Nov 1 00:42:53.813569 kernel: hub 2-0:1.0: USB hub found Nov 1 00:42:53.813629 kernel: hub 2-0:1.0: 10 ports detected Nov 1 00:42:53.813684 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:42:53.813692 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 00:42:53.813698 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 00:42:53.813756 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 00:42:53.813813 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Nov 1 00:42:53.813870 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Nov 1 00:42:53.813926 kernel: sd 0:0:0:0: [sdb] Write Protect is off Nov 1 00:42:53.813982 kernel: sd 1:0:0:0: [sda] Write Protect is off 
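Both Micron SSDs report 937703088 logical 512-byte sectors, which the sd driver summarises as "480 GB/447 GiB". The conversion behind those two figures, for reference:

```python
# Reproduce the capacity figures printed for sda/sdb above:
#   937703088 512-byte logical blocks: (480 GB/447 GiB)
sectors = 937_703_088
logical_block = 512

size_bytes = sectors * logical_block
print(size_bytes)                          # 480103981056
print(round(size_bytes / 10**9), "GB")     # 480 GB  (decimal gigabytes)
print(round(size_bytes / 2**30), "GiB")    # 447 GiB (binary gibibytes)
```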
Nov 1 00:42:53.814036 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 1 00:42:53.814092 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 00:42:53.814148 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Nov 1 00:42:53.814202 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 1 00:42:53.814258 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:42:53.814266 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 00:42:53.814321 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:42:53.814329 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 00:42:53.814335 kernel: GPT:9289727 != 937703087 Nov 1 00:42:53.814341 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Nov 1 00:42:53.839518 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 00:42:53.839529 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Nov 1 00:42:53.839600 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:42:53.839608 kernel: GPT:9289727 != 937703087 Nov 1 00:42:53.839615 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:42:53.839621 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:42:53.839628 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:42:53.839634 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Nov 1 00:42:53.839699 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Nov 1 00:42:53.839766 kernel: hub 1-14:1.0: USB hub found Nov 1 00:42:53.839830 kernel: hub 1-14:1.0: 4 ports detected Nov 1 00:42:53.839898 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Nov 1 00:42:53.854500 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (642) Nov 1 00:42:53.858167 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:42:53.906764 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Nov 1 00:42:53.881603 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:42:53.884506 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:42:53.922983 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:42:53.934577 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:42:53.966113 systemd[1]: Starting disk-uuid.service... Nov 1 00:42:54.015602 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:42:54.015622 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:42:54.015632 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:42:54.015710 disk-uuid[690]: Primary Header is updated. Nov 1 00:42:54.015710 disk-uuid[690]: Secondary Entries is updated. Nov 1 00:42:54.015710 disk-uuid[690]: Secondary Header is updated. 
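The "GPT:9289727 != 937703087" warnings mean the backup GPT header still sits where a smaller original image put it (LBA 9289727) instead of at the last sector of the 937703088-sector disk; disk-uuid then rewrites the headers, which is why the primary and secondary headers are reported as updated. A hedged sketch of the same check done by hand, reading the primary header at LBA 1 (offsets follow the UEFI GPT header layout; point it at a disk image, not a live system):

```python
# Read the primary GPT header (LBA 1) and compare its "alternate LBA" field
# with the real last LBA of the disk -- the same mismatch the kernel reports
# above as "GPT:9289727 != 937703087". Sketch only.
import os
import struct
import sys

def check_backup_header(path: str, sector: int = 512) -> None:
    with open(path, "rb") as f:
        f.seek(sector)                       # primary GPT header lives at LBA 1
        hdr = f.read(sector)
        if hdr[0:8] != b"EFI PART":
            raise ValueError("no GPT signature at LBA 1")
        (alternate_lba,) = struct.unpack_from("<Q", hdr, 32)  # AlternateLBA field
        f.seek(0, os.SEEK_END)
        last_lba = f.tell() // sector - 1
    if alternate_lba != last_lba:
        print(f"GPT:{alternate_lba} != {last_lba} (backup header not at end)")
    else:
        print("backup header is at the last LBA, as expected")

if __name__ == "__main__":
    check_backup_header(sys.argv[1])
```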
Nov 1 00:42:54.052578 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:42:54.140514 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 1 00:42:54.277567 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 00:42:54.310507 kernel: usbcore: registered new interface driver usbhid Nov 1 00:42:54.310550 kernel: usbhid: USB HID core driver Nov 1 00:42:54.344498 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 1 00:42:54.472975 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 1 00:42:54.473110 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 1 00:42:54.473119 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 1 00:42:55.022934 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:42:55.042184 disk-uuid[691]: The operation has completed successfully. Nov 1 00:42:55.050624 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:42:55.081602 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:42:55.177319 kernel: audit: type=1130 audit(1761957775.088:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.177334 kernel: audit: type=1131 audit(1761957775.088:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.081651 systemd[1]: Finished disk-uuid.service. Nov 1 00:42:55.206592 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:42:55.092035 systemd[1]: Starting verity-setup.service... Nov 1 00:42:55.236436 systemd[1]: Found device dev-mapper-usr.device. Nov 1 00:42:55.245640 systemd[1]: Mounting sysusr-usr.mount... Nov 1 00:42:55.259831 systemd[1]: Finished verity-setup.service. Nov 1 00:42:55.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.323502 kernel: audit: type=1130 audit(1761957775.275:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.354547 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 00:42:55.354753 systemd[1]: Mounted sysusr-usr.mount. Nov 1 00:42:55.361795 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 00:42:55.362190 systemd[1]: Starting ignition-setup.service... 
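verity-setup above brings up /dev/mapper/usr, so every block read from /usr is checked against the verity.usrhash root hash from the command line ("verity: sha256 using implementation sha256-avx2"). The concept is a Merkle tree of block digests; the sketch below only illustrates that idea and is not the on-disk hash-tree format, superblock, or salt handling that veritysetup/dm-verity actually use:

```python
# Concept sketch of what dm-verity's root hash protects: a Merkle tree of
# SHA-256 digests over fixed-size data blocks. NOT the real on-disk format
# (no superblock, no salt, no padding rules) -- illustration only.
import hashlib

BLOCK = 4096

def block_hashes(data: bytes) -> list[bytes]:
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def root_hash(data: bytes) -> bytes:
    level = block_hashes(data)
    while len(level) > 1:
        packed = b"".join(level)
        # hash the concatenated child digests in BLOCK-sized groups
        level = [hashlib.sha256(packed[i:i + BLOCK]).digest()
                 for i in range(0, len(packed), BLOCK)]
    return level[0]

print(root_hash(b"x" * (8 * BLOCK)).hex())
# Flipping any single byte of the input changes this root hash, which is why
# the kernel can refuse to serve tampered /usr blocks.
```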
Nov 1 00:42:55.452129 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:42:55.452144 kernel: BTRFS info (device sdb6): using free space tree Nov 1 00:42:55.452152 kernel: BTRFS info (device sdb6): has skinny extents Nov 1 00:42:55.452159 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 00:42:55.400218 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 00:42:55.461024 systemd[1]: Finished ignition-setup.service. Nov 1 00:42:55.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.478927 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:42:55.587499 kernel: audit: type=1130 audit(1761957775.478:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.587516 kernel: audit: type=1130 audit(1761957775.536:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.537308 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 00:42:55.595000 audit: BPF prog-id=9 op=LOAD Nov 1 00:42:55.618516 kernel: audit: type=1334 audit(1761957775.595:24): prog-id=9 op=LOAD Nov 1 00:42:55.596399 systemd[1]: Starting systemd-networkd.service... Nov 1 00:42:55.633692 systemd-networkd[878]: lo: Link UP Nov 1 00:42:55.633695 systemd-networkd[878]: lo: Gained carrier Nov 1 00:42:55.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.660267 ignition[866]: Ignition 2.14.0 Nov 1 00:42:55.716754 kernel: audit: type=1130 audit(1761957775.649:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.634034 systemd-networkd[878]: Enumeration completed Nov 1 00:42:55.660271 ignition[866]: Stage: fetch-offline Nov 1 00:42:55.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.634105 systemd[1]: Started systemd-networkd.service. Nov 1 00:42:55.867953 kernel: audit: type=1130 audit(1761957775.731:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.867967 kernel: audit: type=1130 audit(1761957775.793:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.867976 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 00:42:55.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 00:42:55.660299 ignition[866]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:55.905759 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Nov 1 00:42:55.634758 systemd-networkd[878]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:42:55.660313 ignition[866]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 00:42:55.649579 systemd[1]: Reached target network.target. Nov 1 00:42:55.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.668242 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 00:42:55.672246 unknown[866]: fetched base config from "system" Nov 1 00:42:55.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.972622 iscsid[900]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:42:55.972622 iscsid[900]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Nov 1 00:42:55.972622 iscsid[900]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Nov 1 00:42:55.972622 iscsid[900]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:42:55.972622 iscsid[900]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 00:42:55.972622 iscsid[900]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:42:55.972622 iscsid[900]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:42:56.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:55.668313 ignition[866]: parsed url from cmdline: "" Nov 1 00:42:55.672251 unknown[866]: fetched user config from "system" Nov 1 00:42:56.141606 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 1 00:42:55.668315 ignition[866]: no config URL provided Nov 1 00:42:55.711130 systemd[1]: Starting iscsiuio.service... Nov 1 00:42:55.668318 ignition[866]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:42:55.724872 systemd[1]: Started iscsiuio.service. Nov 1 00:42:55.668343 ignition[866]: parsing config with SHA512: 520a84160ebd4707e0bc3b87b7f0e3994e7fc2be326415aa4738ad2b7cf5466428735731a050503272b26e8be7e700738d7f552cc7e45d9141ad85d0ce5cf09c Nov 1 00:42:55.731961 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:42:55.672529 ignition[866]: fetch-offline: fetch-offline passed Nov 1 00:42:55.793759 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 00:42:55.672532 ignition[866]: POST message to Packet Timeline Nov 1 00:42:55.794212 systemd[1]: Starting ignition-kargs.service... 
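The iscsid warnings above only mean the initrd ships no /etc/iscsi/initiatorname.iscsi; since this host does not log into software iSCSI targets they are harmless. For completeness, a small sketch that produces a file in the format the warning asks for (the domain and identifier below are placeholders, not values from this system):

```python
# Generate an /etc/iscsi/initiatorname.iscsi line in the format iscsid asks
# for: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
# Domain and identifier are placeholders, not values from this host.
import datetime

def iqn(domain: str, identifier: str = "", when: datetime.date = None) -> str:
    when = when or datetime.date.today()
    reversed_domain = ".".join(reversed(domain.split(".")))
    name = f"iqn.{when:%Y-%m}.{reversed_domain}"
    return f"{name}:{identifier}" if identifier else name

line = f"InitiatorName={iqn('example.com', 'node1', datetime.date(2001, 4, 1))}"
print(line)   # InitiatorName=iqn.2001-04.com.example:node1
```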
Nov 1 00:42:55.672537 ignition[866]: POST Status error: resource requires networking Nov 1 00:42:55.870960 systemd-networkd[878]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:42:55.672574 ignition[866]: Ignition finished successfully Nov 1 00:42:55.883115 systemd[1]: Starting iscsid.service... Nov 1 00:42:55.872310 ignition[889]: Ignition 2.14.0 Nov 1 00:42:55.912812 systemd[1]: Started iscsid.service. Nov 1 00:42:55.872314 ignition[889]: Stage: kargs Nov 1 00:42:55.934094 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:42:55.872369 ignition[889]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:55.948879 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:42:55.872379 ignition[889]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 00:42:55.959830 systemd[1]: Reached target remote-fs-pre.target. Nov 1 00:42:55.873766 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 00:42:55.980580 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:42:55.875135 ignition[889]: kargs: kargs passed Nov 1 00:42:55.980695 systemd[1]: Reached target remote-fs.target. Nov 1 00:42:55.875139 ignition[889]: POST message to Packet Timeline Nov 1 00:42:56.033487 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:42:55.875150 ignition[889]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 00:42:56.062220 systemd[1]: Finished dracut-pre-mount.service. Nov 1 00:42:55.879431 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39239->[::1]:53: read: connection refused Nov 1 00:42:56.138513 systemd-networkd[878]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:42:56.083096 ignition[889]: GET https://metadata.packet.net/metadata: attempt #2 Nov 1 00:42:56.166810 systemd-networkd[878]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 1 00:42:56.083433 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46336->[::1]:53: read: connection refused Nov 1 00:42:56.195150 systemd-networkd[878]: enp1s0f1np1: Link UP Nov 1 00:42:56.195360 systemd-networkd[878]: enp1s0f1np1: Gained carrier Nov 1 00:42:56.208970 systemd-networkd[878]: enp1s0f0np0: Link UP Nov 1 00:42:56.209297 systemd-networkd[878]: eno2: Link UP Nov 1 00:42:56.209622 systemd-networkd[878]: eno1: Link UP Nov 1 00:42:56.484257 ignition[889]: GET https://metadata.packet.net/metadata: attempt #3 Nov 1 00:42:56.485459 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53105->[::1]:53: read: connection refused Nov 1 00:42:56.918702 systemd-networkd[878]: enp1s0f0np0: Gained carrier Nov 1 00:42:56.927748 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Nov 1 00:42:56.945686 systemd-networkd[878]: enp1s0f0np0: DHCPv4 address 145.40.82.49/31, gateway 145.40.82.48 acquired from 145.40.83.140 Nov 1 00:42:57.285849 ignition[889]: GET https://metadata.packet.net/metadata: attempt #4 Nov 1 00:42:57.287104 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37108->[::1]:53: read: connection refused Nov 1 00:42:57.662999 systemd-networkd[878]: enp1s0f1np1: Gained IPv6LL Nov 1 00:42:58.751096 systemd-networkd[878]: enp1s0f0np0: Gained IPv6LL Nov 1 00:42:58.888786 ignition[889]: GET https://metadata.packet.net/metadata: attempt #5 Nov 1 00:42:58.890342 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49482->[::1]:53: read: connection refused Nov 1 00:43:02.093559 ignition[889]: GET https://metadata.packet.net/metadata: attempt #6 Nov 1 00:43:03.253374 ignition[889]: GET result: OK Nov 1 00:43:04.599633 ignition[889]: Ignition finished successfully Nov 1 00:43:04.604288 systemd[1]: Finished ignition-kargs.service. Nov 1 00:43:04.692755 kernel: kauditd_printk_skb: 3 callbacks suppressed Nov 1 00:43:04.692771 kernel: audit: type=1130 audit(1761957784.615:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:04.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:04.624726 ignition[920]: Ignition 2.14.0 Nov 1 00:43:04.617928 systemd[1]: Starting ignition-disks.service... 
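The repeated "GET https://metadata.packet.net/metadata: attempt #N" failures are simply Ignition retrying while DNS still points at ::1; attempt #6 succeeds once DHCP has put 145.40.82.49/31 on enp1s0f0np0. A hedged sketch of the same retry-until-the-network-is-ready pattern; the URL comes from the log, while the backoff schedule is an assumption, not Ignition's real timing:

```python
# Retry a metadata fetch until the network is ready, mirroring the attempts
# logged above. The backoff values are assumptions for illustration.
import time
import urllib.error
import urllib.request

METADATA_URL = "https://metadata.packet.net/metadata"

def fetch_with_retry(url: str, attempts: int = 6, delay: float = 1.0) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            print(f"GET {url}: attempt #{attempt}")
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            print(f"GET error: {err}")
            if attempt == attempts:
                raise
            time.sleep(delay)
            delay *= 2          # simple exponential backoff
    raise RuntimeError("unreachable")

if __name__ == "__main__":
    print(len(fetch_with_retry(METADATA_URL)), "bytes of metadata")
```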
Nov 1 00:43:04.624730 ignition[920]: Stage: disks Nov 1 00:43:04.624787 ignition[920]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:43:04.624797 ignition[920]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 00:43:04.626625 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 00:43:04.627224 ignition[920]: disks: disks passed Nov 1 00:43:04.627227 ignition[920]: POST message to Packet Timeline Nov 1 00:43:04.627237 ignition[920]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 00:43:05.875845 ignition[920]: GET result: OK Nov 1 00:43:06.314622 ignition[920]: Ignition finished successfully Nov 1 00:43:06.317270 systemd[1]: Finished ignition-disks.service. Nov 1 00:43:06.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:06.332133 systemd[1]: Reached target initrd-root-device.target. Nov 1 00:43:06.411719 kernel: audit: type=1130 audit(1761957786.331:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:06.396715 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:43:06.396753 systemd[1]: Reached target local-fs.target. Nov 1 00:43:06.420692 systemd[1]: Reached target sysinit.target. Nov 1 00:43:06.420727 systemd[1]: Reached target basic.target. Nov 1 00:43:06.442333 systemd[1]: Starting systemd-fsck-root.service... Nov 1 00:43:06.461892 systemd-fsck[936]: ROOT: clean, 637/553520 files, 56032/553472 blocks Nov 1 00:43:06.481480 systemd[1]: Finished systemd-fsck-root.service. Nov 1 00:43:06.574366 kernel: audit: type=1130 audit(1761957786.489:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:06.574382 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 00:43:06.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:06.490157 systemd[1]: Mounting sysroot.mount... Nov 1 00:43:06.582144 systemd[1]: Mounted sysroot.mount. Nov 1 00:43:06.595775 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:43:06.604297 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:43:06.618340 systemd[1]: Starting flatcar-metadata-hostname.service... Nov 1 00:43:06.638284 systemd[1]: Starting flatcar-static-network.service... Nov 1 00:43:06.652712 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:43:06.652729 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:43:06.669419 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:43:06.696248 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
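The systemd-fsck summary above ("ROOT: clean, 637/553520 files, 56032/553472 blocks") reports inode and block usage of the ROOT filesystem: roughly 0.1% of inodes and 10% of blocks are in use at first boot.

```python
# Interpret the systemd-fsck summary above:
#   ROOT: clean, 637/553520 files, 56032/553472 blocks
used_inodes, total_inodes = 637, 553_520
used_blocks, total_blocks = 56_032, 553_472

print(f"inodes in use: {used_inodes / total_inodes:.2%}")   # ~0.12%
print(f"blocks in use: {used_blocks / total_blocks:.2%}")   # ~10.12%
```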
Nov 1 00:43:06.846737 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (949) Nov 1 00:43:06.846755 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:43:06.846763 kernel: BTRFS info (device sdb6): using free space tree Nov 1 00:43:06.846774 kernel: BTRFS info (device sdb6): has skinny extents Nov 1 00:43:06.846786 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 00:43:06.846851 coreos-metadata[944]: Nov 01 00:43:06.774 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 00:43:06.910737 kernel: audit: type=1130 audit(1761957786.855:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:06.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:06.910777 coreos-metadata[943]: Nov 01 00:43:06.774 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 00:43:06.708280 systemd[1]: Starting initrd-setup-root.service... Nov 1 00:43:06.772253 systemd[1]: Finished initrd-setup-root.service. Nov 1 00:43:06.952612 initrd-setup-root[954]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:43:06.856856 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:43:06.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:07.005893 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:43:07.051716 kernel: audit: type=1130 audit(1761957786.977:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:06.920176 systemd[1]: Starting ignition-mount.service... Nov 1 00:43:07.058727 initrd-setup-root[970]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:43:07.068711 ignition[1017]: INFO : Ignition 2.14.0 Nov 1 00:43:07.068711 ignition[1017]: INFO : Stage: mount Nov 1 00:43:07.068711 ignition[1017]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:43:07.068711 ignition[1017]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 00:43:07.068711 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 00:43:07.068711 ignition[1017]: INFO : mount: mount passed Nov 1 00:43:07.068711 ignition[1017]: INFO : POST message to Packet Timeline Nov 1 00:43:07.068711 ignition[1017]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 00:43:06.946104 systemd[1]: Starting sysroot-boot.service... Nov 1 00:43:07.156797 initrd-setup-root[978]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:43:06.960180 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Nov 1 00:43:06.960241 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Nov 1 00:43:06.965174 systemd[1]: Finished sysroot-boot.service. 
Nov 1 00:43:07.891635 coreos-metadata[943]: Nov 01 00:43:07.891 INFO Fetch successful Nov 1 00:43:07.924601 coreos-metadata[943]: Nov 01 00:43:07.924 INFO wrote hostname ci-3510.3.8-n-3bc793b712 to /sysroot/etc/hostname Nov 1 00:43:07.937611 coreos-metadata[944]: Nov 01 00:43:07.935 INFO Fetch successful Nov 1 00:43:07.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:07.925145 systemd[1]: Finished flatcar-metadata-hostname.service. Nov 1 00:43:08.041597 kernel: audit: type=1130 audit(1761957787.945:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:08.041611 kernel: audit: type=1130 audit(1761957788.011:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:08.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:08.041655 ignition[1017]: INFO : GET result: OK Nov 1 00:43:08.133698 kernel: audit: type=1131 audit(1761957788.011:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:08.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:07.965674 systemd[1]: flatcar-static-network.service: Deactivated successfully. Nov 1 00:43:07.965716 systemd[1]: Finished flatcar-static-network.service. Nov 1 00:43:08.403439 ignition[1017]: INFO : Ignition finished successfully Nov 1 00:43:08.404351 systemd[1]: Finished ignition-mount.service. Nov 1 00:43:08.476532 kernel: audit: type=1130 audit(1761957788.419:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:08.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:08.420633 systemd[1]: Starting ignition-files.service... Nov 1 00:43:08.485335 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:43:08.609592 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1036) Nov 1 00:43:08.609603 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:43:08.609611 kernel: BTRFS info (device sdb6): using free space tree Nov 1 00:43:08.609617 kernel: BTRFS info (device sdb6): has skinny extents Nov 1 00:43:08.609625 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 00:43:08.623256 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
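coreos-metadata fetches the Packet metadata and persists the assigned hostname (ci-3510.3.8-n-3bc793b712) into /sysroot/etc/hostname. A hedged sketch of that step; the "hostname" JSON field and the output path are assumptions for illustration, not a claim about coreos-metadata's exact implementation:

```python
# Sketch of the "wrote hostname ... to /sysroot/etc/hostname" step above:
# fetch the instance metadata and persist its hostname. The "hostname" JSON
# field name and the output path are assumptions for illustration only.
import json
import urllib.request

METADATA_URL = "https://metadata.packet.net/metadata"

def write_hostname(sysroot: str = "/sysroot") -> str:
    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        metadata = json.load(resp)
    hostname = metadata["hostname"]            # assumed field name
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to {sysroot}/etc/hostname")
    return hostname

if __name__ == "__main__":
    write_hostname()
```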
Nov 1 00:43:08.639640 ignition[1055]: INFO : Ignition 2.14.0 Nov 1 00:43:08.639640 ignition[1055]: INFO : Stage: files Nov 1 00:43:08.639640 ignition[1055]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:43:08.639640 ignition[1055]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 00:43:08.639640 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 00:43:08.643559 unknown[1055]: wrote ssh authorized keys file for user: core Nov 1 00:43:08.704598 ignition[1055]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:43:08.704598 ignition[1055]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:43:08.704598 ignition[1055]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:43:08.704598 ignition[1055]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:43:08.704598 ignition[1055]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:43:08.704598 ignition[1055]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:43:08.704598 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:43:08.704598 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 00:43:08.704598 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:43:08.817632 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:43:08.817632 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:43:08.817632 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:43:08.817632 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:43:08.817632 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:43:08.817632 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:43:08.817632 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:43:08.817632 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:43:08.817632 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:43:08.817632 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:43:08.817632 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:43:08.817632 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:43:08.817632 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:43:09.035834 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Nov 1 00:43:09.035834 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:43:09.035834 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4007926008" Nov 1 00:43:09.035834 ignition[1055]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4007926008": device or resource busy Nov 1 00:43:09.035834 ignition[1055]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4007926008", trying btrfs: device or resource busy Nov 1 00:43:09.035834 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4007926008" Nov 1 00:43:09.035834 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4007926008" Nov 1 00:43:09.035834 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem4007926008" Nov 1 00:43:09.035834 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem4007926008" Nov 1 00:43:09.035834 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Nov 1 00:43:09.035834 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:43:09.035834 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 00:43:08.830660 systemd[1]: mnt-oem4007926008.mount: Deactivated successfully. 
Nov 1 00:43:09.307858 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Nov 1 00:43:09.582326 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:43:09.582326 ignition[1055]: INFO : files: op(f): [started] processing unit "packet-phone-home.service" Nov 1 00:43:09.582326 ignition[1055]: INFO : files: op(f): [finished] processing unit "packet-phone-home.service" Nov 1 00:43:09.582326 ignition[1055]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Nov 1 00:43:09.582326 ignition[1055]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Nov 1 00:43:09.582326 ignition[1055]: INFO : files: op(11): [started] processing unit "prepare-helm.service" Nov 1 00:43:09.582326 ignition[1055]: INFO : files: op(11): op(12): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:43:09.676809 ignition[1055]: INFO : files: op(11): op(12): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:43:09.676809 ignition[1055]: INFO : files: op(11): [finished] processing unit "prepare-helm.service" Nov 1 00:43:09.676809 ignition[1055]: INFO : files: op(13): [started] setting preset to enabled for "packet-phone-home.service" Nov 1 00:43:09.676809 ignition[1055]: INFO : files: op(13): [finished] setting preset to enabled for "packet-phone-home.service" Nov 1 00:43:09.676809 ignition[1055]: INFO : files: op(14): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 00:43:09.676809 ignition[1055]: INFO : files: op(14): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 00:43:09.676809 ignition[1055]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:43:09.676809 ignition[1055]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:43:09.676809 ignition[1055]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:43:09.676809 ignition[1055]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:43:09.676809 ignition[1055]: INFO : files: files passed Nov 1 00:43:09.676809 ignition[1055]: INFO : POST message to Packet Timeline Nov 1 00:43:09.676809 ignition[1055]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 00:43:10.535950 ignition[1055]: INFO : GET result: OK Nov 1 00:43:11.029073 ignition[1055]: INFO : Ignition finished successfully Nov 1 00:43:11.049331 systemd[1]: Finished ignition-files.service. Nov 1 00:43:11.116588 kernel: audit: type=1130 audit(1761957791.057:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.063529 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
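The op(13)–op(15) "setting preset to enabled" lines correspond to systemd preset entries for the three units named in the log, which are later applied on the real root. The sketch below shows the standard preset-file syntax ("enable <unit>" lines under a system-preset directory); the file name is an assumption for illustration, since the exact file Ignition writes is not shown in the log:

```python
# Sketch of what "setting preset to enabled for <unit>" amounts to: a systemd
# preset file of "enable <unit>" lines that `systemctl preset-all` applies.
# The preset file name below is assumed; the units come from the log above.
from pathlib import Path

UNITS = [
    "packet-phone-home.service",
    "coreos-metadata-sshkeys@.service",
    "prepare-helm.service",
]

def write_preset(sysroot: str = "/sysroot",
                 name: str = "20-example.preset") -> Path:
    preset_dir = Path(sysroot) / "etc/systemd/system-preset"
    preset_dir.mkdir(parents=True, exist_ok=True)
    preset = preset_dir / name
    preset.write_text("".join(f"enable {unit}\n" for unit in UNITS))
    return preset

if __name__ == "__main__":
    print(write_preset(sysroot="."))   # write under ./etc for a dry run
```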
Nov 1 00:43:11.124776 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 00:43:11.158780 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:43:11.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.125091 systemd[1]: Starting ignition-quench.service... Nov 1 00:43:11.349977 kernel: audit: type=1130 audit(1761957791.168:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.349993 kernel: audit: type=1130 audit(1761957791.236:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.350003 kernel: audit: type=1131 audit(1761957791.236:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.141868 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 00:43:11.168946 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:43:11.169022 systemd[1]: Finished ignition-quench.service. Nov 1 00:43:11.507732 kernel: audit: type=1130 audit(1761957791.391:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.507746 kernel: audit: type=1131 audit(1761957791.391:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.236771 systemd[1]: Reached target ignition-complete.target. Nov 1 00:43:11.359108 systemd[1]: Starting initrd-parse-etc.service... Nov 1 00:43:11.380368 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:43:11.380416 systemd[1]: Finished initrd-parse-etc.service. Nov 1 00:43:11.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.391792 systemd[1]: Reached target initrd-fs.target. 
Nov 1 00:43:11.635580 kernel: audit: type=1130 audit(1761957791.564:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.516736 systemd[1]: Reached target initrd.target. Nov 1 00:43:11.532771 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 00:43:11.533266 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 00:43:11.547863 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 00:43:11.565191 systemd[1]: Starting initrd-cleanup.service... Nov 1 00:43:11.635390 systemd[1]: Stopped target nss-lookup.target. Nov 1 00:43:11.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.643751 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 00:43:11.788736 kernel: audit: type=1131 audit(1761957791.706:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.677711 systemd[1]: Stopped target timers.target. Nov 1 00:43:11.691905 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:43:11.692047 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 00:43:11.707050 systemd[1]: Stopped target initrd.target. Nov 1 00:43:11.779808 systemd[1]: Stopped target basic.target. Nov 1 00:43:11.795760 systemd[1]: Stopped target ignition-complete.target. Nov 1 00:43:11.818821 systemd[1]: Stopped target ignition-diskful.target. Nov 1 00:43:11.835805 systemd[1]: Stopped target initrd-root-device.target. Nov 1 00:43:11.850864 systemd[1]: Stopped target remote-fs.target. Nov 1 00:43:11.866058 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 00:43:11.882122 systemd[1]: Stopped target sysinit.target. Nov 1 00:43:11.898141 systemd[1]: Stopped target local-fs.target. Nov 1 00:43:11.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.915132 systemd[1]: Stopped target local-fs-pre.target. Nov 1 00:43:12.043731 kernel: audit: type=1131 audit(1761957791.959:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.930115 systemd[1]: Stopped target swap.target. Nov 1 00:43:12.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:12.113549 kernel: audit: type=1131 audit(1761957792.052:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.945114 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:43:12.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:11.945520 systemd[1]: Stopped dracut-pre-mount.service. 
Nov 1 00:43:11.960339 systemd[1]: Stopped target cryptsetup.target. Nov 1 00:43:12.036793 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:43:12.036879 systemd[1]: Stopped dracut-initqueue.service. Nov 1 00:43:12.052868 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:43:12.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:12.052947 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 00:43:12.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:12.121944 systemd[1]: Stopped target paths.target. Nov 1 00:43:12.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:12.137749 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:43:12.260737 ignition[1103]: INFO : Ignition 2.14.0 Nov 1 00:43:12.260737 ignition[1103]: INFO : Stage: umount Nov 1 00:43:12.260737 ignition[1103]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:43:12.260737 ignition[1103]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 00:43:12.260737 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 00:43:12.260737 ignition[1103]: INFO : umount: umount passed Nov 1 00:43:12.260737 ignition[1103]: INFO : POST message to Packet Timeline Nov 1 00:43:12.260737 ignition[1103]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 00:43:12.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:12.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:12.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:12.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:12.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:12.399292 iscsid[900]: iscsid shutting down. Nov 1 00:43:12.141736 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 00:43:12.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:12.144822 systemd[1]: Stopped target slices.target. 
Nov 1 00:43:12.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:12.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:12.158819 systemd[1]: Stopped target sockets.target. Nov 1 00:43:12.180814 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:43:12.180949 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 00:43:12.198019 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:43:12.198181 systemd[1]: Stopped ignition-files.service. Nov 1 00:43:12.218226 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 00:43:12.218659 systemd[1]: Stopped flatcar-metadata-hostname.service. Nov 1 00:43:12.237337 systemd[1]: Stopping ignition-mount.service... Nov 1 00:43:12.249729 systemd[1]: Stopping iscsid.service... Nov 1 00:43:12.267673 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:43:12.267775 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 00:43:12.276636 systemd[1]: Stopping sysroot-boot.service... Nov 1 00:43:12.287674 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:43:12.287814 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 00:43:12.305201 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:43:12.305447 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 00:43:12.344489 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:43:12.344822 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 00:43:12.344871 systemd[1]: Stopped iscsid.service. Nov 1 00:43:12.357929 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:43:12.358158 systemd[1]: Stopped sysroot-boot.service. Nov 1 00:43:12.374033 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:43:12.374318 systemd[1]: Closed iscsid.socket. Nov 1 00:43:12.388953 systemd[1]: Stopping iscsiuio.service... Nov 1 00:43:12.406249 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 00:43:12.406478 systemd[1]: Stopped iscsiuio.service. Nov 1 00:43:12.421459 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:43:12.421704 systemd[1]: Finished initrd-cleanup.service. Nov 1 00:43:12.441029 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:43:12.441130 systemd[1]: Closed iscsiuio.socket. Nov 1 00:43:13.296919 ignition[1103]: INFO : GET result: OK Nov 1 00:43:13.881198 ignition[1103]: INFO : Ignition finished successfully Nov 1 00:43:13.884235 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:43:13.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:13.884512 systemd[1]: Stopped ignition-mount.service. Nov 1 00:43:13.899122 systemd[1]: Stopped target network.target. Nov 1 00:43:13.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:13.914717 systemd[1]: ignition-disks.service: Deactivated successfully. 
Nov 1 00:43:13.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:13.914860 systemd[1]: Stopped ignition-disks.service. Nov 1 00:43:13.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:13.929823 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:43:13.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:13.929949 systemd[1]: Stopped ignition-kargs.service. Nov 1 00:43:13.944841 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:43:13.944971 systemd[1]: Stopped ignition-setup.service. Nov 1 00:43:14.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:13.963923 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:43:14.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:14.047000 audit: BPF prog-id=6 op=UNLOAD Nov 1 00:43:13.964073 systemd[1]: Stopped initrd-setup-root.service. Nov 1 00:43:13.982295 systemd[1]: Stopping systemd-networkd.service... Nov 1 00:43:13.992704 systemd-networkd[878]: enp1s0f1np1: DHCPv6 lease lost Nov 1 00:43:14.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:13.998948 systemd[1]: Stopping systemd-resolved.service... Nov 1 00:43:14.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:14.001723 systemd-networkd[878]: enp1s0f0np0: DHCPv6 lease lost Nov 1 00:43:14.121000 audit: BPF prog-id=9 op=UNLOAD Nov 1 00:43:14.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:14.014330 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:43:14.014601 systemd[1]: Stopped systemd-resolved.service. Nov 1 00:43:14.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:14.032170 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:43:14.032396 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:43:14.047183 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:43:14.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:14.047272 systemd[1]: Closed systemd-networkd.socket. 
Nov 1 00:43:14.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:14.066408 systemd[1]: Stopping network-cleanup.service... Nov 1 00:43:14.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:14.079728 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:43:14.079955 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 00:43:14.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:14.097943 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:43:14.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:14.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:14.098068 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:43:14.114171 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:43:14.114306 systemd[1]: Stopped systemd-modules-load.service. Nov 1 00:43:14.130151 systemd[1]: Stopping systemd-udevd.service... Nov 1 00:43:14.149452 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:43:14.150899 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:43:14.151285 systemd[1]: Stopped systemd-udevd.service. Nov 1 00:43:14.165153 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:43:14.165280 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 00:43:14.178887 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:43:14.178992 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 00:43:14.195791 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:43:14.195915 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 00:43:14.210851 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:43:14.210970 systemd[1]: Stopped dracut-cmdline.service. Nov 1 00:43:14.227819 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:43:14.227935 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 00:43:14.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:14.244400 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 00:43:14.249777 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:43:14.249803 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 00:43:14.273798 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:43:14.273856 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 00:43:14.440766 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:43:14.441003 systemd[1]: Stopped network-cleanup.service. 
Nov 1 00:43:14.454991 systemd[1]: Reached target initrd-switch-root.target. Nov 1 00:43:14.473271 systemd[1]: Starting initrd-switch-root.service... Nov 1 00:43:14.510238 systemd[1]: Switching root. Nov 1 00:43:14.571241 systemd-journald[268]: Journal stopped Nov 1 00:43:18.656234 systemd-journald[268]: Received SIGTERM from PID 1 (n/a). Nov 1 00:43:18.656249 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 00:43:18.656258 kernel: SELinux: Class anon_inode not defined in policy. Nov 1 00:43:18.656264 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 00:43:18.656269 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:43:18.656274 kernel: SELinux: policy capability open_perms=1 Nov 1 00:43:18.656280 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:43:18.656286 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:43:18.656292 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:43:18.656299 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:43:18.656304 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:43:18.656310 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:43:18.656315 systemd[1]: Successfully loaded SELinux policy in 319.393ms. Nov 1 00:43:18.656322 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.647ms. Nov 1 00:43:18.656331 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:43:18.656338 systemd[1]: Detected architecture x86-64. Nov 1 00:43:18.656344 systemd[1]: Detected first boot. Nov 1 00:43:18.656350 systemd[1]: Hostname set to . Nov 1 00:43:18.656356 systemd[1]: Initializing machine ID from random generator. Nov 1 00:43:18.656362 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 00:43:18.656368 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:43:18.656375 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:43:18.656382 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:43:18.656389 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:43:18.656395 kernel: kauditd_printk_skb: 49 callbacks suppressed Nov 1 00:43:18.656401 kernel: audit: type=1334 audit(1761957796.959:92): prog-id=12 op=LOAD Nov 1 00:43:18.656407 kernel: audit: type=1334 audit(1761957796.959:93): prog-id=3 op=UNLOAD Nov 1 00:43:18.656414 kernel: audit: type=1334 audit(1761957797.004:94): prog-id=13 op=LOAD Nov 1 00:43:18.656419 kernel: audit: type=1334 audit(1761957797.049:95): prog-id=14 op=LOAD Nov 1 00:43:18.656426 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Nov 1 00:43:18.656432 kernel: audit: type=1334 audit(1761957797.049:96): prog-id=4 op=UNLOAD Nov 1 00:43:18.656438 kernel: audit: type=1334 audit(1761957797.049:97): prog-id=5 op=UNLOAD Nov 1 00:43:18.656443 systemd[1]: Stopped initrd-switch-root.service. Nov 1 00:43:18.656450 kernel: audit: type=1131 audit(1761957797.050:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.656456 kernel: audit: type=1334 audit(1761957797.212:99): prog-id=12 op=UNLOAD Nov 1 00:43:18.656461 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:43:18.656469 kernel: audit: type=1130 audit(1761957797.219:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.656475 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 00:43:18.656482 kernel: audit: type=1131 audit(1761957797.219:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.656488 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 00:43:18.656499 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Nov 1 00:43:18.656506 systemd[1]: Created slice system-getty.slice. Nov 1 00:43:18.656514 systemd[1]: Created slice system-modprobe.slice. Nov 1 00:43:18.656521 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 00:43:18.656527 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 00:43:18.656556 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 00:43:18.656562 systemd[1]: Created slice user.slice. Nov 1 00:43:18.656569 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:43:18.656593 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 00:43:18.656600 systemd[1]: Set up automount boot.automount. Nov 1 00:43:18.656606 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 00:43:18.656614 systemd[1]: Stopped target initrd-switch-root.target. Nov 1 00:43:18.656621 systemd[1]: Stopped target initrd-fs.target. Nov 1 00:43:18.656627 systemd[1]: Stopped target initrd-root-fs.target. Nov 1 00:43:18.656634 systemd[1]: Reached target integritysetup.target. Nov 1 00:43:18.656640 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:43:18.656646 systemd[1]: Reached target remote-fs.target. Nov 1 00:43:18.656653 systemd[1]: Reached target slices.target. Nov 1 00:43:18.656660 systemd[1]: Reached target swap.target. Nov 1 00:43:18.656667 systemd[1]: Reached target torcx.target. Nov 1 00:43:18.656674 systemd[1]: Reached target veritysetup.target. Nov 1 00:43:18.656680 systemd[1]: Listening on systemd-coredump.socket. Nov 1 00:43:18.656687 systemd[1]: Listening on systemd-initctl.socket. Nov 1 00:43:18.656693 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:43:18.656700 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:43:18.656708 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:43:18.656714 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 00:43:18.656721 systemd[1]: Mounting dev-hugepages.mount... Nov 1 00:43:18.656728 systemd[1]: Mounting dev-mqueue.mount... 
Nov 1 00:43:18.656734 systemd[1]: Mounting media.mount... Nov 1 00:43:18.656741 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:43:18.656748 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 00:43:18.656755 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 00:43:18.656762 systemd[1]: Mounting tmp.mount... Nov 1 00:43:18.656769 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 00:43:18.656776 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:43:18.656783 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:43:18.656789 systemd[1]: Starting modprobe@configfs.service... Nov 1 00:43:18.656796 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:43:18.656802 systemd[1]: Starting modprobe@drm.service... Nov 1 00:43:18.656809 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:43:18.656816 systemd[1]: Starting modprobe@fuse.service... Nov 1 00:43:18.656823 kernel: fuse: init (API version 7.34) Nov 1 00:43:18.656830 systemd[1]: Starting modprobe@loop.service... Nov 1 00:43:18.656836 kernel: loop: module loaded Nov 1 00:43:18.656843 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:43:18.656849 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:43:18.656856 systemd[1]: Stopped systemd-fsck-root.service. Nov 1 00:43:18.656863 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:43:18.656869 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:43:18.656877 systemd[1]: Stopped systemd-journald.service. Nov 1 00:43:18.656884 systemd[1]: Starting systemd-journald.service... Nov 1 00:43:18.656891 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:43:18.656899 systemd-journald[1256]: Journal started Nov 1 00:43:18.656924 systemd-journald[1256]: Runtime Journal (/run/log/journal/8ac4584d06214b45bd260fb89f734b07) is 8.0M, max 640.1M, 632.1M free. 
Nov 1 00:43:15.031000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:43:15.312000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 00:43:15.315000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:43:15.315000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:43:15.315000 audit: BPF prog-id=10 op=LOAD Nov 1 00:43:15.315000 audit: BPF prog-id=10 op=UNLOAD Nov 1 00:43:15.315000 audit: BPF prog-id=11 op=LOAD Nov 1 00:43:15.315000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:43:15.396000 audit[1145]: AVC avc: denied { associate } for pid=1145 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 00:43:15.396000 audit[1145]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a58d2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1128 pid=1145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:15.396000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:43:15.422000 audit[1145]: AVC avc: denied { associate } for pid=1145 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 00:43:15.422000 audit[1145]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a59a9 a2=1ed a3=0 items=2 ppid=1128 pid=1145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:15.422000 audit: CWD cwd="/" Nov 1 00:43:15.422000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:15.422000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:15.422000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:43:16.959000 audit: BPF prog-id=12 op=LOAD Nov 1 00:43:16.959000 audit: BPF prog-id=3 op=UNLOAD Nov 1 00:43:17.004000 audit: BPF prog-id=13 op=LOAD Nov 1 00:43:17.049000 audit: BPF prog-id=14 op=LOAD Nov 1 
00:43:17.049000 audit: BPF prog-id=4 op=UNLOAD Nov 1 00:43:17.049000 audit: BPF prog-id=5 op=UNLOAD Nov 1 00:43:17.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:17.212000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:43:17.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:17.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.628000 audit: BPF prog-id=15 op=LOAD Nov 1 00:43:18.629000 audit: BPF prog-id=16 op=LOAD Nov 1 00:43:18.629000 audit: BPF prog-id=17 op=LOAD Nov 1 00:43:18.629000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:43:18.629000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:43:18.653000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 00:43:18.653000 audit[1256]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffedad69990 a2=4000 a3=7ffedad69a2c items=0 ppid=1 pid=1256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:18.653000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 00:43:16.958047 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:43:15.393116 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:43:16.958055 systemd[1]: Unnecessary job was removed for dev-sdb6.device. Nov 1 00:43:15.393687 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:43:17.050877 systemd[1]: systemd-journald.service: Deactivated successfully. 
Nov 1 00:43:15.393700 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:43:15.393719 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Nov 1 00:43:15.393725 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=debug msg="skipped missing lower profile" missing profile=oem Nov 1 00:43:15.393742 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Nov 1 00:43:15.393750 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Nov 1 00:43:15.393859 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Nov 1 00:43:15.393882 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:43:15.393889 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:43:15.394821 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Nov 1 00:43:15.394842 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Nov 1 00:43:15.394852 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Nov 1 00:43:15.394860 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Nov 1 00:43:15.394870 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Nov 1 00:43:15.394877 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Nov 1 00:43:16.593091 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:16Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:43:16.593238 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:16Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy 
/bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:43:16.593300 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:16Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:43:16.593402 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:16Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:43:16.593432 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:16Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Nov 1 00:43:16.593467 /usr/lib/systemd/system-generators/torcx-generator[1145]: time="2025-11-01T00:43:16Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Nov 1 00:43:18.687684 systemd[1]: Starting systemd-network-generator.service... Nov 1 00:43:18.709538 systemd[1]: Starting systemd-remount-fs.service... Nov 1 00:43:18.731544 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:43:18.752529 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:43:18.752548 systemd[1]: Stopped verity-setup.service. Nov 1 00:43:18.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.798535 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:43:18.813673 systemd[1]: Started systemd-journald.service. Nov 1 00:43:18.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.821061 systemd[1]: Mounted dev-hugepages.mount. Nov 1 00:43:18.828769 systemd[1]: Mounted dev-mqueue.mount. Nov 1 00:43:18.835774 systemd[1]: Mounted media.mount. Nov 1 00:43:18.842768 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 00:43:18.851748 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 00:43:18.859734 systemd[1]: Mounted tmp.mount. Nov 1 00:43:18.866822 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 00:43:18.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.874836 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:43:18.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.882887 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Nov 1 00:43:18.883010 systemd[1]: Finished modprobe@configfs.service. Nov 1 00:43:18.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.891975 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:43:18.892112 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:43:18.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.901122 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:43:18.901314 systemd[1]: Finished modprobe@drm.service. Nov 1 00:43:18.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.910341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:43:18.910666 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:43:18.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.920371 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:43:18.920715 systemd[1]: Finished modprobe@fuse.service. Nov 1 00:43:18.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.929365 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:43:18.929747 systemd[1]: Finished modprobe@loop.service. Nov 1 00:43:18.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:18.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.939370 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:43:18.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.949338 systemd[1]: Finished systemd-network-generator.service. Nov 1 00:43:18.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.958298 systemd[1]: Finished systemd-remount-fs.service. Nov 1 00:43:18.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.967319 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:43:18.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:18.977035 systemd[1]: Reached target network-pre.target. Nov 1 00:43:18.988431 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 00:43:18.998217 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 00:43:19.006716 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:43:19.007723 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 00:43:19.015182 systemd[1]: Starting systemd-journal-flush.service... Nov 1 00:43:19.018866 systemd-journald[1256]: Time spent on flushing to /var/log/journal/8ac4584d06214b45bd260fb89f734b07 is 15.808ms for 1577 entries. Nov 1 00:43:19.018866 systemd-journald[1256]: System Journal (/var/log/journal/8ac4584d06214b45bd260fb89f734b07) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:43:19.060083 systemd-journald[1256]: Received client request to flush runtime journal. Nov 1 00:43:19.030639 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:43:19.031119 systemd[1]: Starting systemd-random-seed.service... Nov 1 00:43:19.042622 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:43:19.043174 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:43:19.050113 systemd[1]: Starting systemd-sysusers.service... Nov 1 00:43:19.058159 systemd[1]: Starting systemd-udev-settle.service... Nov 1 00:43:19.066770 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 00:43:19.075628 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 00:43:19.083735 systemd[1]: Finished systemd-journal-flush.service. Nov 1 00:43:19.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:19.091762 systemd[1]: Finished systemd-random-seed.service. 
Nov 1 00:43:19.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:19.100714 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:43:19.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:19.109696 systemd[1]: Finished systemd-sysusers.service. Nov 1 00:43:19.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:19.118707 systemd[1]: Reached target first-boot-complete.target. Nov 1 00:43:19.126824 udevadm[1272]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:43:19.333518 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 00:43:19.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:19.342000 audit: BPF prog-id=18 op=LOAD Nov 1 00:43:19.343000 audit: BPF prog-id=19 op=LOAD Nov 1 00:43:19.343000 audit: BPF prog-id=7 op=UNLOAD Nov 1 00:43:19.343000 audit: BPF prog-id=8 op=UNLOAD Nov 1 00:43:19.344315 systemd[1]: Starting systemd-udevd.service... Nov 1 00:43:19.366323 systemd-udevd[1273]: Using default interface naming scheme 'v252'. Nov 1 00:43:19.388079 systemd[1]: Started systemd-udevd.service. Nov 1 00:43:19.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:19.399072 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Nov 1 00:43:19.399000 audit: BPF prog-id=20 op=LOAD Nov 1 00:43:19.400261 systemd[1]: Starting systemd-networkd.service... Nov 1 00:43:19.431751 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Nov 1 00:43:19.431847 kernel: ACPI: button: Sleep Button [SLPB] Nov 1 00:43:19.431874 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 1 00:43:19.431000 audit: BPF prog-id=21 op=LOAD Nov 1 00:43:19.447000 audit: BPF prog-id=22 op=LOAD Nov 1 00:43:19.447000 audit: BPF prog-id=23 op=LOAD Nov 1 00:43:19.448538 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:43:19.448938 systemd[1]: Starting systemd-userdbd.service... 
Nov 1 00:43:19.463500 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:43:19.428000 audit[1338]: AVC avc: denied { confidentiality } for pid=1338 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 00:43:19.428000 audit[1338]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=558905a7a130 a1=4d9cc a2=7f79437f4bc5 a3=5 items=42 ppid=1273 pid=1338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:19.428000 audit: CWD cwd="/" Nov 1 00:43:19.428000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=1 name=(null) inode=22134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=2 name=(null) inode=22134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=3 name=(null) inode=22135 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=4 name=(null) inode=22134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=5 name=(null) inode=22136 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=6 name=(null) inode=22134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=7 name=(null) inode=22137 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=8 name=(null) inode=22137 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=9 name=(null) inode=22138 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=10 name=(null) inode=22137 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=11 name=(null) inode=22139 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=12 name=(null) inode=22137 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=13 name=(null) 
inode=22140 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=14 name=(null) inode=22137 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=15 name=(null) inode=22141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=16 name=(null) inode=22137 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=17 name=(null) inode=22142 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=18 name=(null) inode=22134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=19 name=(null) inode=22143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=20 name=(null) inode=22143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=21 name=(null) inode=22144 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=22 name=(null) inode=22143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=23 name=(null) inode=22145 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=24 name=(null) inode=22143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=25 name=(null) inode=22146 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=26 name=(null) inode=22143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=27 name=(null) inode=22147 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=28 name=(null) inode=22143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=29 name=(null) inode=22148 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=30 name=(null) inode=22134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=31 name=(null) inode=22149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=32 name=(null) inode=22149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=33 name=(null) inode=22150 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=34 name=(null) inode=22149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=35 name=(null) inode=22151 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=36 name=(null) inode=22149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=37 name=(null) inode=22152 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=38 name=(null) inode=22149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=39 name=(null) inode=22153 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=40 name=(null) inode=22149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PATH item=41 name=(null) inode=22154 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:19.428000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 00:43:19.517220 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Nov 1 00:43:19.549942 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Nov 1 00:43:19.550031 kernel: IPMI message handler: version 39.2 Nov 1 00:43:19.550045 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Nov 1 00:43:19.534904 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:43:19.556674 systemd[1]: Started systemd-userdbd.service. 
Nov 1 00:43:19.562501 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Nov 1 00:43:19.594961 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Nov 1 00:43:19.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:19.613532 kernel: iTCO_vendor_support: vendor-support=0 Nov 1 00:43:19.613556 kernel: ipmi device interface Nov 1 00:43:19.649508 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Nov 1 00:43:19.680613 kernel: ipmi_si: IPMI System Interface driver Nov 1 00:43:19.680633 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Nov 1 00:43:19.680719 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Nov 1 00:43:19.729960 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Nov 1 00:43:19.729974 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Nov 1 00:43:19.729985 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Nov 1 00:43:19.810238 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Nov 1 00:43:19.810328 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Nov 1 00:43:19.810398 kernel: ipmi_si: Adding ACPI-specified kcs state machine Nov 1 00:43:19.810411 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Nov 1 00:43:19.797590 systemd-networkd[1324]: bond0: netdev ready Nov 1 00:43:19.800366 systemd-networkd[1324]: lo: Link UP Nov 1 00:43:19.800369 systemd-networkd[1324]: lo: Gained carrier Nov 1 00:43:19.800898 systemd-networkd[1324]: Enumeration completed Nov 1 00:43:19.800962 systemd[1]: Started systemd-networkd.service. Nov 1 00:43:19.801242 systemd-networkd[1324]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Nov 1 00:43:19.803670 systemd-networkd[1324]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:15:ae:cd.network. Nov 1 00:43:19.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:19.894657 kernel: intel_rapl_common: Found RAPL domain package Nov 1 00:43:19.894702 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Nov 1 00:43:19.894802 kernel: intel_rapl_common: Found RAPL domain core Nov 1 00:43:19.932715 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Nov 1 00:43:19.932824 kernel: intel_rapl_common: Found RAPL domain dram Nov 1 00:43:19.991502 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Nov 1 00:43:20.009501 kernel: ipmi_ssif: IPMI SSIF Interface driver Nov 1 00:43:20.763546 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 00:43:20.806692 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Nov 1 00:43:20.808732 systemd-networkd[1324]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:15:ae:cc.network. Nov 1 00:43:20.831771 systemd[1]: Finished systemd-udev-settle.service. 
Nov 1 00:43:20.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:20.849282 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:43:20.860654 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 00:43:20.865486 lvm[1377]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:43:20.914196 systemd[1]: Finished lvm2-activation-early.service. Nov 1 00:43:20.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:20.922837 systemd[1]: Reached target cryptsetup.target. Nov 1 00:43:20.931171 systemd[1]: Starting lvm2-activation.service... Nov 1 00:43:20.933360 lvm[1378]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:43:20.962938 systemd[1]: Finished lvm2-activation.service. Nov 1 00:43:20.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:20.980691 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:43:20.989563 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 00:43:20.996586 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:43:20.996601 systemd[1]: Reached target local-fs.target. Nov 1 00:43:21.004602 systemd[1]: Reached target machines.target. Nov 1 00:43:21.013216 systemd[1]: Starting ldconfig.service... Nov 1 00:43:21.019943 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:43:21.019965 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:43:21.020563 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:43:21.028029 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:43:21.038103 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:43:21.038815 systemd[1]: Starting systemd-sysext.service... Nov 1 00:43:21.039046 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1380 (bootctl) Nov 1 00:43:21.039879 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:43:21.048926 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:43:21.058981 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:43:21.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:21.064469 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:43:21.064643 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:43:21.100547 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 00:43:21.191766 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Nov 1 00:43:21.192128 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:43:21.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:21.224546 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:43:21.239148 systemd-fsck[1390]: fsck.fat 4.2 (2021-01-31) Nov 1 00:43:21.239148 systemd-fsck[1390]: /dev/sdb1: 790 files, 120773/258078 clusters Nov 1 00:43:21.239898 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:43:21.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:21.251481 systemd[1]: Mounting boot.mount... Nov 1 00:43:21.273542 kernel: loop1: detected capacity change from 0 to 224512 Nov 1 00:43:21.274642 systemd[1]: Mounted boot.mount. Nov 1 00:43:21.288865 (sd-sysext)[1394]: Using extensions 'kubernetes'. Nov 1 00:43:21.289050 (sd-sysext)[1394]: Merged extensions into '/usr'. Nov 1 00:43:21.298294 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:43:21.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:21.306745 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:43:21.307478 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:43:21.314714 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:43:21.315338 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:43:21.323117 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:43:21.330231 systemd[1]: Starting modprobe@loop.service... Nov 1 00:43:21.336623 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:43:21.336690 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:43:21.336754 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:43:21.338347 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:43:21.345753 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:43:21.345820 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:43:21.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:21.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:21.354754 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:43:21.354818 systemd[1]: Finished modprobe@efi_pstore.service. 
Nov 1 00:43:21.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:21.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:21.362768 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:43:21.362830 systemd[1]: Finished modprobe@loop.service. Nov 1 00:43:21.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:21.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:21.370803 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:43:21.370863 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:43:21.371338 systemd[1]: Finished systemd-sysext.service. Nov 1 00:43:21.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:21.380141 systemd[1]: Starting ensure-sysext.service... Nov 1 00:43:21.387099 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:43:21.396849 systemd[1]: Reloading. Nov 1 00:43:21.398186 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:43:21.402017 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:43:21.407194 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:43:21.418252 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2025-11-01T00:43:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:43:21.418268 /usr/lib/systemd/system-generators/torcx-generator[1421]: time="2025-11-01T00:43:21Z" level=info msg="torcx already run" Nov 1 00:43:21.437402 ldconfig[1379]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:43:21.472549 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:43:21.472561 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:43:21.484744 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 00:43:21.531500 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 00:43:21.532000 audit: BPF prog-id=24 op=LOAD Nov 1 00:43:21.532000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:43:21.533000 audit: BPF prog-id=25 op=LOAD Nov 1 00:43:21.533000 audit: BPF prog-id=26 op=LOAD Nov 1 00:43:21.533000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:43:21.533000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:43:21.533000 audit: BPF prog-id=27 op=LOAD Nov 1 00:43:21.533000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:43:21.533000 audit: BPF prog-id=28 op=LOAD Nov 1 00:43:21.533000 audit: BPF prog-id=29 op=LOAD Nov 1 00:43:21.533000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:43:21.533000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:43:21.534000 audit: BPF prog-id=30 op=LOAD Nov 1 00:43:21.534000 audit: BPF prog-id=31 op=LOAD Nov 1 00:43:21.534000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:43:21.534000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:43:21.534000 audit: BPF prog-id=32 op=LOAD Nov 1 00:43:21.534000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:43:21.536904 systemd[1]: Finished ldconfig.service. Nov 1 00:43:21.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:21.544099 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:43:21.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:21.555001 systemd[1]: Starting audit-rules.service... Nov 1 00:43:21.562185 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:43:21.571269 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:43:21.570000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:43:21.570000 audit[1499]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc83e2650 a2=420 a3=0 items=0 ppid=1483 pid=1499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:21.570000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:43:21.571552 augenrules[1499]: No rules Nov 1 00:43:21.580643 systemd[1]: Starting systemd-resolved.service... Nov 1 00:43:21.588770 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:43:21.596136 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:43:21.603081 systemd[1]: Finished audit-rules.service. Nov 1 00:43:21.609761 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:43:21.617748 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:43:21.630330 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:43:21.639211 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:43:21.639926 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:43:21.647123 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:43:21.654115 systemd[1]: Starting modprobe@loop.service... Nov 1 00:43:21.660568 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Nov 1 00:43:21.660654 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:43:21.660777 systemd-resolved[1505]: Positive Trust Anchors: Nov 1 00:43:21.660782 systemd-resolved[1505]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:43:21.660802 systemd-resolved[1505]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:43:21.661413 systemd[1]: Starting systemd-update-done.service... Nov 1 00:43:21.664754 systemd-resolved[1505]: Using system hostname 'ci-3510.3.8-n-3bc793b712'. Nov 1 00:43:21.668553 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:43:21.669070 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:43:21.677830 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:43:21.677899 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:43:21.685827 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:43:21.685894 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:43:21.701805 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:43:21.701873 systemd[1]: Finished modprobe@loop.service. Nov 1 00:43:21.706547 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 1 00:43:21.713809 systemd[1]: Finished systemd-update-done.service. Nov 1 00:43:21.721807 systemd[1]: Reached target time-set.target. Nov 1 00:43:21.729635 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:43:21.729694 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:43:21.731178 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:43:21.731824 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:43:21.750553 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Nov 1 00:43:21.750578 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Nov 1 00:43:21.785205 systemd[1]: Starting modprobe@drm.service... Nov 1 00:43:21.788552 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Nov 1 00:43:21.788581 kernel: bond0: active interface up! Nov 1 00:43:21.820169 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:43:21.823571 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Nov 1 00:43:21.823696 systemd-networkd[1324]: bond0: Link UP Nov 1 00:43:21.823914 systemd-networkd[1324]: enp1s0f1np1: Link UP Nov 1 00:43:21.824060 systemd-networkd[1324]: enp1s0f1np1: Gained carrier Nov 1 00:43:21.825076 systemd-networkd[1324]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:15:ae:cc.network. Nov 1 00:43:21.831167 systemd[1]: Starting modprobe@loop.service... 
Nov 1 00:43:21.837662 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:43:21.837752 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:43:21.838390 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:43:21.846714 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:43:21.847397 systemd[1]: Started systemd-resolved.service. Nov 1 00:43:21.867533 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 00:43:21.875829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:43:21.875897 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:43:21.883807 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:43:21.883876 systemd[1]: Finished modprobe@drm.service. Nov 1 00:43:21.891764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:43:21.891832 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:43:21.899775 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:43:21.899841 systemd[1]: Finished modprobe@loop.service. Nov 1 00:43:21.907917 systemd[1]: Reached target network.target. Nov 1 00:43:21.915601 systemd[1]: Reached target nss-lookup.target. Nov 1 00:43:21.923582 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:43:21.923598 systemd[1]: Reached target sysinit.target. Nov 1 00:43:21.941591 systemd[1]: Started motdgen.path. Nov 1 00:43:21.948537 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Nov 1 00:43:21.948745 systemd-networkd[1324]: enp1s0f0np0: Link UP Nov 1 00:43:21.948913 systemd-networkd[1324]: bond0: Gained carrier Nov 1 00:43:21.949007 systemd-networkd[1324]: enp1s0f0np0: Gained carrier Nov 1 00:43:21.949060 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Nov 1 00:43:21.963609 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:43:21.970549 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Nov 1 00:43:21.970573 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Nov 1 00:43:21.989667 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Nov 1 00:43:21.989792 systemd-networkd[1324]: enp1s0f1np1: Link DOWN Nov 1 00:43:21.989795 systemd-networkd[1324]: enp1s0f1np1: Lost carrier Nov 1 00:43:21.997661 systemd[1]: Started logrotate.timer. Nov 1 00:43:21.997681 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Nov 1 00:43:21.997861 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Nov 1 00:43:22.004636 systemd[1]: Started mdadm.timer. Nov 1 00:43:22.011597 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:43:22.019529 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:43:22.019545 systemd[1]: Reached target paths.target. Nov 1 00:43:22.026579 systemd[1]: Reached target timers.target. Nov 1 00:43:22.033705 systemd[1]: Listening on dbus.socket. 
Nov 1 00:43:22.041179 systemd[1]: Starting docker.socket... Nov 1 00:43:22.049001 systemd[1]: Listening on sshd.socket. Nov 1 00:43:22.055633 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:43:22.055659 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:43:22.056244 systemd[1]: Finished ensure-sysext.service. Nov 1 00:43:22.065505 systemd[1]: Listening on docker.socket. Nov 1 00:43:22.073040 systemd[1]: Reached target sockets.target. Nov 1 00:43:22.081551 systemd[1]: Reached target basic.target. Nov 1 00:43:22.088574 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:43:22.088593 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:43:22.088605 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:43:22.089096 systemd[1]: Starting containerd.service... Nov 1 00:43:22.096014 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Nov 1 00:43:22.105099 systemd[1]: Starting coreos-metadata.service... Nov 1 00:43:22.112136 systemd[1]: Starting dbus.service... Nov 1 00:43:22.118234 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:43:22.122857 jq[1529]: false Nov 1 00:43:22.127541 systemd[1]: Starting extend-filesystems.service... Nov 1 00:43:22.129488 coreos-metadata[1522]: Nov 01 00:43:22.129 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 00:43:22.133051 dbus-daemon[1528]: [system] SELinux support is enabled Nov 1 00:43:22.135461 extend-filesystems[1530]: Found loop1 Nov 1 00:43:22.142570 extend-filesystems[1530]: Found sda Nov 1 00:43:22.142570 extend-filesystems[1530]: Found sdb Nov 1 00:43:22.142570 extend-filesystems[1530]: Found sdb1 Nov 1 00:43:22.142570 extend-filesystems[1530]: Found sdb2 Nov 1 00:43:22.142570 extend-filesystems[1530]: Found sdb3 Nov 1 00:43:22.142570 extend-filesystems[1530]: Found usr Nov 1 00:43:22.142570 extend-filesystems[1530]: Found sdb4 Nov 1 00:43:22.142570 extend-filesystems[1530]: Found sdb6 Nov 1 00:43:22.142570 extend-filesystems[1530]: Found sdb7 Nov 1 00:43:22.142570 extend-filesystems[1530]: Found sdb9 Nov 1 00:43:22.142570 extend-filesystems[1530]: Checking size of /dev/sdb9 Nov 1 00:43:22.142570 extend-filesystems[1530]: Resized partition /dev/sdb9 Nov 1 00:43:22.386636 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Nov 1 00:43:22.386657 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 00:43:22.386764 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Nov 1 00:43:22.386779 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Nov 1 00:43:22.386790 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Nov 1 00:43:22.386801 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Nov 1 00:43:22.386811 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Nov 1 00:43:22.135585 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). 
Nov 1 00:43:22.386878 coreos-metadata[1525]: Nov 01 00:43:22.137 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 00:43:22.386989 extend-filesystems[1542]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 00:43:22.136358 systemd[1]: Starting motdgen.service... Nov 1 00:43:22.404104 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 00:43:22.143609 systemd[1]: Starting prepare-helm.service... Nov 1 00:43:22.196231 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:43:22.223696 systemd-networkd[1324]: enp1s0f1np1: Link UP Nov 1 00:43:22.223910 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Nov 1 00:43:22.223960 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Nov 1 00:43:22.409123 update_engine[1559]: I1101 00:43:22.365347 1559 main.cc:92] Flatcar Update Engine starting Nov 1 00:43:22.409123 update_engine[1559]: I1101 00:43:22.368676 1559 update_check_scheduler.cc:74] Next update check in 8m5s Nov 1 00:43:22.223980 systemd-networkd[1324]: enp1s0f1np1: Gained carrier Nov 1 00:43:22.409306 jq[1560]: true Nov 1 00:43:22.271184 systemd[1]: Starting sshd-keygen.service... Nov 1 00:43:22.284738 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Nov 1 00:43:22.409489 tar[1562]: linux-amd64/LICENSE Nov 1 00:43:22.409489 tar[1562]: linux-amd64/helm Nov 1 00:43:22.284869 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Nov 1 00:43:22.409663 jq[1564]: true Nov 1 00:43:22.292105 systemd[1]: Starting systemd-logind.service... Nov 1 00:43:22.308570 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:43:22.309222 systemd[1]: Starting tcsd.service... Nov 1 00:43:22.314890 systemd-logind[1557]: Watching system buttons on /dev/input/event3 (Power Button) Nov 1 00:43:22.314900 systemd-logind[1557]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 1 00:43:22.314910 systemd-logind[1557]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Nov 1 00:43:22.315113 systemd-logind[1557]: New seat seat0. Nov 1 00:43:22.321982 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:43:22.322422 systemd[1]: Starting update-engine.service... Nov 1 00:43:22.336130 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:43:22.350533 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:43:22.350925 systemd[1]: Started dbus.service. Nov 1 00:43:22.366583 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:43:22.366676 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:43:22.366850 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:43:22.366938 systemd[1]: Finished motdgen.service. Nov 1 00:43:22.378996 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:43:22.379097 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 00:43:22.409909 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Nov 1 00:43:22.410059 systemd[1]: Condition check resulted in tcsd.service being skipped. Nov 1 00:43:22.411669 systemd[1]: Started update-engine.service. 
Nov 1 00:43:22.412608 env[1565]: time="2025-11-01T00:43:22.412583709Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 00:43:22.421758 env[1565]: time="2025-11-01T00:43:22.421711529Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:43:22.423729 env[1565]: time="2025-11-01T00:43:22.423696694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:43:22.424429 env[1565]: time="2025-11-01T00:43:22.424377786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:43:22.424429 env[1565]: time="2025-11-01T00:43:22.424394571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:43:22.424410 systemd[1]: Started systemd-logind.service. Nov 1 00:43:22.424531 env[1565]: time="2025-11-01T00:43:22.424511824Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:43:22.424531 env[1565]: time="2025-11-01T00:43:22.424522275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:43:22.424531 env[1565]: time="2025-11-01T00:43:22.424529662Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:43:22.424584 env[1565]: time="2025-11-01T00:43:22.424535187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:43:22.424601 env[1565]: time="2025-11-01T00:43:22.424582224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:43:22.424877 env[1565]: time="2025-11-01T00:43:22.424840625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:43:22.425080 env[1565]: time="2025-11-01T00:43:22.425037312Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:43:22.425080 env[1565]: time="2025-11-01T00:43:22.425051002Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:43:22.426606 env[1565]: time="2025-11-01T00:43:22.426594847Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:43:22.426606 env[1565]: time="2025-11-01T00:43:22.426605419Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:43:22.435353 systemd[1]: Started locksmithd.service. Nov 1 00:43:22.441642 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:43:22.441727 systemd[1]: Reached target system-config.target. 
Nov 1 00:43:22.446125 env[1565]: time="2025-11-01T00:43:22.446081134Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:43:22.446125 env[1565]: time="2025-11-01T00:43:22.446105330Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:43:22.446125 env[1565]: time="2025-11-01T00:43:22.446115239Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:43:22.446196 env[1565]: time="2025-11-01T00:43:22.446132915Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:43:22.446196 env[1565]: time="2025-11-01T00:43:22.446141298Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:43:22.446196 env[1565]: time="2025-11-01T00:43:22.446148967Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:43:22.446196 env[1565]: time="2025-11-01T00:43:22.446156529Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:43:22.446196 env[1565]: time="2025-11-01T00:43:22.446164563Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:43:22.446196 env[1565]: time="2025-11-01T00:43:22.446188670Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 00:43:22.446291 env[1565]: time="2025-11-01T00:43:22.446199071Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:43:22.446291 env[1565]: time="2025-11-01T00:43:22.446206499Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:43:22.446291 env[1565]: time="2025-11-01T00:43:22.446213617Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:43:22.446291 env[1565]: time="2025-11-01T00:43:22.446265585Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:43:22.446351 env[1565]: time="2025-11-01T00:43:22.446321178Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:43:22.446454 env[1565]: time="2025-11-01T00:43:22.446446782Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:43:22.446476 env[1565]: time="2025-11-01T00:43:22.446462240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:43:22.446498 env[1565]: time="2025-11-01T00:43:22.446474893Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:43:22.446517 env[1565]: time="2025-11-01T00:43:22.446512237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:43:22.446534 env[1565]: time="2025-11-01T00:43:22.446522011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:43:22.446534 env[1565]: time="2025-11-01T00:43:22.446529074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Nov 1 00:43:22.446565 env[1565]: time="2025-11-01T00:43:22.446535216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:43:22.446565 env[1565]: time="2025-11-01T00:43:22.446542451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:43:22.446565 env[1565]: time="2025-11-01T00:43:22.446549383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:43:22.446565 env[1565]: time="2025-11-01T00:43:22.446555591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:43:22.446643 env[1565]: time="2025-11-01T00:43:22.446564711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:43:22.446643 env[1565]: time="2025-11-01T00:43:22.446575509Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:43:22.446643 env[1565]: time="2025-11-01T00:43:22.446638095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:43:22.446695 env[1565]: time="2025-11-01T00:43:22.446649784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:43:22.446695 env[1565]: time="2025-11-01T00:43:22.446659558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:43:22.446695 env[1565]: time="2025-11-01T00:43:22.446671535Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:43:22.446695 env[1565]: time="2025-11-01T00:43:22.446685956Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:43:22.446780 env[1565]: time="2025-11-01T00:43:22.446697558Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:43:22.446780 env[1565]: time="2025-11-01T00:43:22.446710910Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:43:22.446780 env[1565]: time="2025-11-01T00:43:22.446733733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:43:22.446855 bash[1589]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:43:22.446901 env[1565]: time="2025-11-01T00:43:22.446861436Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:43:22.448352 env[1565]: time="2025-11-01T00:43:22.446908902Z" level=info msg="Connect containerd service" Nov 1 00:43:22.448352 env[1565]: time="2025-11-01T00:43:22.446927310Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:43:22.448352 env[1565]: time="2025-11-01T00:43:22.447266462Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:43:22.448352 env[1565]: time="2025-11-01T00:43:22.447352418Z" level=info msg="Start subscribing containerd event" Nov 1 00:43:22.448352 env[1565]: time="2025-11-01T00:43:22.447389967Z" level=info msg="Start recovering state" Nov 1 00:43:22.448352 env[1565]: time="2025-11-01T00:43:22.447393918Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:43:22.448352 env[1565]: time="2025-11-01T00:43:22.447418795Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 1 00:43:22.448352 env[1565]: time="2025-11-01T00:43:22.447424782Z" level=info msg="Start event monitor" Nov 1 00:43:22.448352 env[1565]: time="2025-11-01T00:43:22.447438772Z" level=info msg="Start snapshots syncer" Nov 1 00:43:22.448352 env[1565]: time="2025-11-01T00:43:22.447444490Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:43:22.448352 env[1565]: time="2025-11-01T00:43:22.447443551Z" level=info msg="containerd successfully booted in 0.035200s" Nov 1 00:43:22.448352 env[1565]: time="2025-11-01T00:43:22.447448785Z" level=info msg="Start streaming server" Nov 1 00:43:22.449641 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:43:22.449767 systemd[1]: Reached target user-config.target. Nov 1 00:43:22.459132 systemd[1]: Started containerd.service. Nov 1 00:43:22.465805 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 00:43:22.494345 locksmithd[1600]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:43:22.671498 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Nov 1 00:43:22.699609 extend-filesystems[1542]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Nov 1 00:43:22.699609 extend-filesystems[1542]: old_desc_blocks = 1, new_desc_blocks = 56 Nov 1 00:43:22.699609 extend-filesystems[1542]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Nov 1 00:43:22.725615 extend-filesystems[1530]: Resized filesystem in /dev/sdb9 Nov 1 00:43:22.700130 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:43:22.749652 tar[1562]: linux-amd64/README.md Nov 1 00:43:22.700223 systemd[1]: Finished extend-filesystems.service. Nov 1 00:43:22.725434 systemd[1]: Finished prepare-helm.service. Nov 1 00:43:23.006578 systemd-networkd[1324]: bond0: Gained IPv6LL Nov 1 00:43:23.006847 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Nov 1 00:43:23.262779 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Nov 1 00:43:23.262930 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Nov 1 00:43:23.263801 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:43:23.273756 systemd[1]: Reached target network-online.target. Nov 1 00:43:23.282599 systemd[1]: Starting kubelet.service... Nov 1 00:43:23.446981 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:43:23.459239 systemd[1]: Finished sshd-keygen.service. Nov 1 00:43:23.466499 systemd[1]: Starting issuegen.service... Nov 1 00:43:23.474875 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:43:23.474971 systemd[1]: Finished issuegen.service. Nov 1 00:43:23.483501 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:43:23.492893 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:43:23.505445 systemd[1]: Started getty@tty1.service. Nov 1 00:43:23.515769 systemd[1]: Started serial-getty@ttyS1.service. Nov 1 00:43:23.525240 systemd[1]: Reached target getty.target. Nov 1 00:43:24.064468 systemd[1]: Started kubelet.service. 
Nov 1 00:43:24.530114 kubelet[1635]: E1101 00:43:24.530067 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:43:24.531235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:43:24.531309 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:43:25.560674 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Nov 1 00:43:28.255956 coreos-metadata[1525]: Nov 01 00:43:28.255 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Nov 1 00:43:28.256799 coreos-metadata[1522]: Nov 01 00:43:28.255 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Nov 1 00:43:28.655340 login[1628]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Nov 1 00:43:28.661581 login[1629]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 00:43:28.689821 systemd-logind[1557]: New session 1 of user core. Nov 1 00:43:28.692266 systemd[1]: Created slice user-500.slice. Nov 1 00:43:28.695067 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:43:28.706136 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 00:43:28.706884 systemd[1]: Starting user@500.service... Nov 1 00:43:28.708949 (systemd)[1654]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:28.784194 systemd[1654]: Queued start job for default target default.target. Nov 1 00:43:28.784444 systemd[1654]: Reached target paths.target. Nov 1 00:43:28.784456 systemd[1654]: Reached target sockets.target. Nov 1 00:43:28.784464 systemd[1654]: Reached target timers.target. Nov 1 00:43:28.784471 systemd[1654]: Reached target basic.target. Nov 1 00:43:28.784492 systemd[1654]: Reached target default.target. Nov 1 00:43:28.784511 systemd[1654]: Startup finished in 72ms. Nov 1 00:43:28.784551 systemd[1]: Started user@500.service. Nov 1 00:43:28.785123 systemd[1]: Started session-1.scope. Nov 1 00:43:29.189549 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Nov 1 00:43:29.189720 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Nov 1 00:43:29.256174 coreos-metadata[1522]: Nov 01 00:43:29.256 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Nov 1 00:43:29.256423 coreos-metadata[1525]: Nov 01 00:43:29.256 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Nov 1 00:43:29.656310 login[1628]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 00:43:29.667690 systemd-logind[1557]: New session 2 of user core. Nov 1 00:43:29.670265 systemd[1]: Started session-2.scope. Nov 1 00:43:30.141695 systemd[1]: Created slice system-sshd.slice. Nov 1 00:43:30.142455 systemd[1]: Started sshd@0-145.40.82.49:22-147.75.109.163:41096.service. 
Nov 1 00:43:30.213001 sshd[1675]: Accepted publickey for core from 147.75.109.163 port 41096 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:43:30.214103 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:30.217217 systemd-logind[1557]: New session 3 of user core. Nov 1 00:43:30.217961 systemd[1]: Started session-3.scope. Nov 1 00:43:30.271432 systemd[1]: Started sshd@1-145.40.82.49:22-147.75.109.163:51552.service. Nov 1 00:43:30.280815 coreos-metadata[1525]: Nov 01 00:43:30.280 INFO Fetch successful Nov 1 00:43:30.300866 sshd[1680]: Accepted publickey for core from 147.75.109.163 port 51552 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:43:30.301603 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:30.303905 systemd-logind[1557]: New session 4 of user core. Nov 1 00:43:30.304743 systemd[1]: Started session-4.scope. Nov 1 00:43:30.313791 systemd[1]: Finished coreos-metadata.service. Nov 1 00:43:30.314588 systemd[1]: Started packet-phone-home.service. Nov 1 00:43:30.315134 coreos-metadata[1522]: Nov 01 00:43:30.315 INFO Fetch successful Nov 1 00:43:30.319713 curl[1685]: % Total % Received % Xferd Average Speed Time Time Time Current Nov 1 00:43:30.319846 curl[1685]: Dload Upload Total Spent Left Speed Nov 1 00:43:30.348770 unknown[1522]: wrote ssh authorized keys file for user: core Nov 1 00:43:30.354157 sshd[1680]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:30.355693 systemd[1]: sshd@1-145.40.82.49:22-147.75.109.163:51552.service: Deactivated successfully. Nov 1 00:43:30.356039 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:43:30.356363 systemd-logind[1557]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:43:30.356961 systemd[1]: Started sshd@2-145.40.82.49:22-147.75.109.163:51556.service. Nov 1 00:43:30.357349 systemd-logind[1557]: Removed session 4. Nov 1 00:43:30.361387 update-ssh-keys[1687]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:43:30.361685 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Nov 1 00:43:30.361846 systemd[1]: Reached target multi-user.target. Nov 1 00:43:30.362548 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 00:43:30.366597 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 00:43:30.366675 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:43:30.371768 systemd[1]: Startup finished in 1.869s (kernel) + 24.850s (initrd) + 15.683s (userspace) = 42.402s. Nov 1 00:43:30.387076 sshd[1690]: Accepted publickey for core from 147.75.109.163 port 51556 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:43:30.387835 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:30.390384 systemd-logind[1557]: New session 5 of user core. Nov 1 00:43:30.390925 systemd[1]: Started session-5.scope. Nov 1 00:43:30.444901 sshd[1690]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:30.446228 systemd[1]: sshd@2-145.40.82.49:22-147.75.109.163:51556.service: Deactivated successfully. Nov 1 00:43:30.446646 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:43:30.447065 systemd-logind[1557]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:43:30.447503 systemd-logind[1557]: Removed session 5. 
Nov 1 00:43:30.768597 curl[1685]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Nov 1 00:43:30.771140 systemd[1]: packet-phone-home.service: Deactivated successfully. Nov 1 00:43:34.664751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:43:34.665272 systemd[1]: Stopped kubelet.service. Nov 1 00:43:34.668700 systemd[1]: Starting kubelet.service... Nov 1 00:43:34.881348 systemd[1]: Started kubelet.service. Nov 1 00:43:34.966727 kubelet[1700]: E1101 00:43:34.966525 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:43:34.972227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:43:34.972297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:43:40.454934 systemd[1]: Started sshd@3-145.40.82.49:22-147.75.109.163:59224.service. Nov 1 00:43:40.485261 sshd[1719]: Accepted publickey for core from 147.75.109.163 port 59224 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:43:40.486201 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:40.489200 systemd-logind[1557]: New session 6 of user core. Nov 1 00:43:40.490108 systemd[1]: Started session-6.scope. Nov 1 00:43:40.545794 sshd[1719]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:40.547395 systemd[1]: sshd@3-145.40.82.49:22-147.75.109.163:59224.service: Deactivated successfully. Nov 1 00:43:40.547705 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:43:40.548067 systemd-logind[1557]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:43:40.548555 systemd[1]: Started sshd@4-145.40.82.49:22-147.75.109.163:59238.service. Nov 1 00:43:40.548977 systemd-logind[1557]: Removed session 6. Nov 1 00:43:40.579894 sshd[1725]: Accepted publickey for core from 147.75.109.163 port 59238 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:43:40.581012 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:40.584615 systemd-logind[1557]: New session 7 of user core. Nov 1 00:43:40.585795 systemd[1]: Started session-7.scope. Nov 1 00:43:40.640138 sshd[1725]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:40.641928 systemd[1]: sshd@4-145.40.82.49:22-147.75.109.163:59238.service: Deactivated successfully. Nov 1 00:43:40.642261 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:43:40.642607 systemd-logind[1557]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:43:40.643135 systemd[1]: Started sshd@5-145.40.82.49:22-147.75.109.163:59242.service. Nov 1 00:43:40.643503 systemd-logind[1557]: Removed session 7. Nov 1 00:43:40.673951 sshd[1731]: Accepted publickey for core from 147.75.109.163 port 59242 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:43:40.674961 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:40.678640 systemd-logind[1557]: New session 8 of user core. Nov 1 00:43:40.679781 systemd[1]: Started session-8.scope. 
Nov 1 00:43:40.735843 sshd[1731]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:40.737680 systemd[1]: sshd@5-145.40.82.49:22-147.75.109.163:59242.service: Deactivated successfully. Nov 1 00:43:40.738007 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:43:40.738352 systemd-logind[1557]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:43:40.738951 systemd[1]: Started sshd@6-145.40.82.49:22-147.75.109.163:59256.service. Nov 1 00:43:40.739411 systemd-logind[1557]: Removed session 8. Nov 1 00:43:40.770117 sshd[1737]: Accepted publickey for core from 147.75.109.163 port 59256 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:43:40.771274 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:40.775330 systemd-logind[1557]: New session 9 of user core. Nov 1 00:43:40.776337 systemd[1]: Started session-9.scope. Nov 1 00:43:40.860651 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:43:40.861363 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:43:40.891051 dbus-daemon[1528]: \xd0\xddSP\xeeU: received setenforce notice (enforcing=805470592) Nov 1 00:43:40.896214 sudo[1740]: pam_unix(sudo:session): session closed for user root Nov 1 00:43:40.901957 sshd[1737]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:40.909268 systemd[1]: sshd@6-145.40.82.49:22-147.75.109.163:59256.service: Deactivated successfully. Nov 1 00:43:40.910936 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:43:40.912688 systemd-logind[1557]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:43:40.915550 systemd[1]: Started sshd@7-145.40.82.49:22-147.75.109.163:59266.service. Nov 1 00:43:40.918092 systemd-logind[1557]: Removed session 9. Nov 1 00:43:40.973169 sshd[1744]: Accepted publickey for core from 147.75.109.163 port 59266 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:43:40.973896 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:40.976173 systemd-logind[1557]: New session 10 of user core. Nov 1 00:43:40.976629 systemd[1]: Started session-10.scope. Nov 1 00:43:41.030483 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:43:41.030909 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:43:41.036126 sudo[1748]: pam_unix(sudo:session): session closed for user root Nov 1 00:43:41.049240 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:43:41.049945 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:43:41.075375 systemd[1]: Stopping audit-rules.service... Nov 1 00:43:41.078000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 00:43:41.078973 auditctl[1751]: No rules Nov 1 00:43:41.079763 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:43:41.080207 systemd[1]: Stopped audit-rules.service. 
Nov 1 00:43:41.084561 kernel: kauditd_printk_skb: 131 callbacks suppressed Nov 1 00:43:41.084714 kernel: audit: type=1305 audit(1761957821.078:184): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 00:43:41.084418 systemd[1]: Starting audit-rules.service... Nov 1 00:43:41.078000 audit[1751]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe0aed4ad0 a2=420 a3=0 items=0 ppid=1 pid=1751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.124009 augenrules[1768]: No rules Nov 1 00:43:41.124735 systemd[1]: Finished audit-rules.service. Nov 1 00:43:41.125623 sudo[1747]: pam_unix(sudo:session): session closed for user root Nov 1 00:43:41.127162 sshd[1744]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:41.129822 systemd[1]: sshd@7-145.40.82.49:22-147.75.109.163:59266.service: Deactivated successfully. Nov 1 00:43:41.130367 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:43:41.130954 systemd-logind[1557]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:43:41.131727 kernel: audit: type=1300 audit(1761957821.078:184): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe0aed4ad0 a2=420 a3=0 items=0 ppid=1 pid=1751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.131787 kernel: audit: type=1327 audit(1761957821.078:184): proctitle=2F7362696E2F617564697463746C002D44 Nov 1 00:43:41.078000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Nov 1 00:43:41.131908 systemd[1]: Started sshd@8-145.40.82.49:22-147.75.109.163:59282.service. Nov 1 00:43:41.132582 systemd-logind[1557]: Removed session 10. Nov 1 00:43:41.141272 kernel: audit: type=1131 audit(1761957821.079:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:41.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:41.163748 kernel: audit: type=1130 audit(1761957821.124:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:41.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:41.186194 kernel: audit: type=1106 audit(1761957821.125:187): pid=1747 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:43:41.125000 audit[1747]: USER_END pid=1747 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:41.188221 sshd[1774]: Accepted publickey for core from 147.75.109.163 port 59282 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:43:41.189816 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:41.192102 systemd-logind[1557]: New session 11 of user core. Nov 1 00:43:41.192555 systemd[1]: Started session-11.scope. Nov 1 00:43:41.212259 kernel: audit: type=1104 audit(1761957821.125:188): pid=1747 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:43:41.125000 audit[1747]: CRED_DISP pid=1747 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:43:41.235887 kernel: audit: type=1106 audit(1761957821.127:189): pid=1744 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:41.127000 audit[1744]: USER_END pid=1744 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:41.239721 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:43:41.239846 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:43:41.251808 systemd[1]: Starting docker.service... 
Nov 1 00:43:41.268195 kernel: audit: type=1104 audit(1761957821.127:190): pid=1744 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:41.127000 audit[1744]: CRED_DISP pid=1744 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:41.268312 env[1793]: time="2025-11-01T00:43:41.268220836Z" level=info msg="Starting up" Nov 1 00:43:41.268890 env[1793]: time="2025-11-01T00:43:41.268850427Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:43:41.268890 env[1793]: time="2025-11-01T00:43:41.268860424Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:43:41.268890 env[1793]: time="2025-11-01T00:43:41.268875242Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:43:41.268890 env[1793]: time="2025-11-01T00:43:41.268881506Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:43:41.269838 env[1793]: time="2025-11-01T00:43:41.269812165Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:43:41.269870 env[1793]: time="2025-11-01T00:43:41.269835840Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:43:41.269893 env[1793]: time="2025-11-01T00:43:41.269864108Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:43:41.269893 env[1793]: time="2025-11-01T00:43:41.269877423Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:43:41.294285 kernel: audit: type=1131 audit(1761957821.129:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-145.40.82.49:22-147.75.109.163:59266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:41.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-145.40.82.49:22-147.75.109.163:59266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:41.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-145.40.82.49:22-147.75.109.163:59282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:41.187000 audit[1774]: USER_ACCT pid=1774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:41.189000 audit[1774]: CRED_ACQ pid=1774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:41.189000 audit[1774]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf82f7ae0 a2=3 a3=0 items=0 ppid=1 pid=1774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.189000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:43:41.194000 audit[1774]: USER_START pid=1774 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:41.194000 audit[1776]: CRED_ACQ pid=1776 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:41.239000 audit[1777]: USER_ACCT pid=1777 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:43:41.239000 audit[1777]: CRED_REFR pid=1777 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:43:41.240000 audit[1777]: USER_START pid=1777 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:43:41.327346 env[1793]: time="2025-11-01T00:43:41.327306943Z" level=info msg="Loading containers: start." 
Nov 1 00:43:41.365000 audit[1837]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1837 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.365000 audit[1837]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd10f077d0 a2=0 a3=7ffd10f077bc items=0 ppid=1793 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.365000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Nov 1 00:43:41.366000 audit[1839]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.366000 audit[1839]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd8edc9530 a2=0 a3=7ffd8edc951c items=0 ppid=1793 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.366000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Nov 1 00:43:41.367000 audit[1841]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.367000 audit[1841]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcfa6a5620 a2=0 a3=7ffcfa6a560c items=0 ppid=1793 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.367000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Nov 1 00:43:41.368000 audit[1843]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.368000 audit[1843]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdc4c21580 a2=0 a3=7ffdc4c2156c items=0 ppid=1793 pid=1843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.368000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 00:43:41.369000 audit[1845]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1845 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.369000 audit[1845]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffef64908d0 a2=0 a3=7ffef64908bc items=0 ppid=1793 pid=1845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.369000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Nov 1 00:43:41.397000 audit[1850]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 
00:43:41.397000 audit[1850]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe8519bfd0 a2=0 a3=7ffe8519bfbc items=0 ppid=1793 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.397000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Nov 1 00:43:41.402000 audit[1852]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.402000 audit[1852]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe25e5d7a0 a2=0 a3=7ffe25e5d78c items=0 ppid=1793 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.402000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Nov 1 00:43:41.405000 audit[1854]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.405000 audit[1854]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff3e018d70 a2=0 a3=7fff3e018d5c items=0 ppid=1793 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.405000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Nov 1 00:43:41.407000 audit[1856]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1856 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.407000 audit[1856]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffef31542a0 a2=0 a3=7ffef315428c items=0 ppid=1793 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.407000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:43:41.417000 audit[1860]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1860 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.417000 audit[1860]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc69d8b060 a2=0 a3=7ffc69d8b04c items=0 ppid=1793 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.417000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:43:41.436000 audit[1861]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1861 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.436000 audit[1861]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdbad13840 a2=0 a3=7ffdbad1382c items=0 ppid=1793 pid=1861 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.436000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:43:41.466531 kernel: Initializing XFRM netlink socket Nov 1 00:43:41.530016 env[1793]: time="2025-11-01T00:43:41.529972393Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 00:43:41.530815 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Nov 1 00:43:41.543000 audit[1869]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1869 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.543000 audit[1869]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fffb59cd120 a2=0 a3=7fffb59cd10c items=0 ppid=1793 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.543000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Nov 1 00:43:41.571000 audit[1872]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1872 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.571000 audit[1872]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd3be23010 a2=0 a3=7ffd3be22ffc items=0 ppid=1793 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.571000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Nov 1 00:43:41.573000 audit[1875]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.573000 audit[1875]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff6906a5c0 a2=0 a3=7fff6906a5ac items=0 ppid=1793 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.573000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Nov 1 00:43:41.575000 audit[1877]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.575000 audit[1877]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd9fee0230 a2=0 a3=7ffd9fee021c items=0 ppid=1793 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.575000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 
Nov 1 00:43:41.576000 audit[1879]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.576000 audit[1879]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe5e12c980 a2=0 a3=7ffe5e12c96c items=0 ppid=1793 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.576000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Nov 1 00:43:41.578000 audit[1881]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.578000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff4df2d880 a2=0 a3=7fff4df2d86c items=0 ppid=1793 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.578000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Nov 1 00:43:41.579000 audit[1883]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.579000 audit[1883]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffcd83b9830 a2=0 a3=7ffcd83b981c items=0 ppid=1793 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.579000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Nov 1 00:43:41.588000 audit[1886]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.588000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffca2e565d0 a2=0 a3=7ffca2e565bc items=0 ppid=1793 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.588000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Nov 1 00:43:41.590000 audit[1888]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.590000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffea3a20d80 a2=0 a3=7ffea3a20d6c items=0 ppid=1793 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.590000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Nov 1 00:43:41.592000 audit[1890]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.592000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc4bbbf7c0 a2=0 a3=7ffc4bbbf7ac items=0 ppid=1793 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.592000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 00:43:41.594000 audit[1892]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.594000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc6b61fdc0 a2=0 a3=7ffc6b61fdac items=0 ppid=1793 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.594000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Nov 1 00:43:41.595648 systemd-networkd[1324]: docker0: Link UP Nov 1 00:43:41.601000 audit[1896]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1896 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.601000 audit[1896]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe3b35bfd0 a2=0 a3=7ffe3b35bfbc items=0 ppid=1793 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.601000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:43:41.611000 audit[1897]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:41.611000 audit[1897]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdba4248f0 a2=0 a3=7ffdba4248dc items=0 ppid=1793 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.611000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:43:41.612391 env[1793]: time="2025-11-01T00:43:41.612319412Z" level=info msg="Loading containers: done." 
Nov 1 00:43:41.631549 env[1793]: time="2025-11-01T00:43:41.631392642Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:43:41.632050 env[1793]: time="2025-11-01T00:43:41.631938002Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:43:41.632333 env[1793]: time="2025-11-01T00:43:41.632247817Z" level=info msg="Daemon has completed initialization" Nov 1 00:43:41.635364 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2595719029-merged.mount: Deactivated successfully. Nov 1 00:43:41.674989 systemd[1]: Started docker.service. Nov 1 00:43:41.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:41.685435 env[1793]: time="2025-11-01T00:43:41.685302202Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:43:41.737438 systemd-timesyncd[1506]: Contacted time server [2607:ff50:0:1a::20]:123 (2.flatcar.pool.ntp.org). Nov 1 00:43:41.737631 systemd-timesyncd[1506]: Initial clock synchronization to Sat 2025-11-01 00:43:41.555474 UTC. Nov 1 00:43:42.630244 env[1565]: time="2025-11-01T00:43:42.630220683Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:43:43.140713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3576550541.mount: Deactivated successfully. Nov 1 00:43:44.228641 env[1565]: time="2025-11-01T00:43:44.228583742Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:44.229163 env[1565]: time="2025-11-01T00:43:44.229151118Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:44.230243 env[1565]: time="2025-11-01T00:43:44.230231926Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:44.231373 env[1565]: time="2025-11-01T00:43:44.231358087Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:44.231900 env[1565]: time="2025-11-01T00:43:44.231886325Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:43:44.232284 env[1565]: time="2025-11-01T00:43:44.232271209Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:43:45.163566 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:43:45.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:45.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:45.163749 systemd[1]: Stopped kubelet.service. Nov 1 00:43:45.164658 systemd[1]: Starting kubelet.service... Nov 1 00:43:45.383336 systemd[1]: Started kubelet.service. Nov 1 00:43:45.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:45.415288 kubelet[1948]: E1101 00:43:45.415183 1948 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:43:45.416746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:43:45.416816 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:43:45.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:43:45.492316 env[1565]: time="2025-11-01T00:43:45.492262107Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:45.492884 env[1565]: time="2025-11-01T00:43:45.492845615Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:45.494265 env[1565]: time="2025-11-01T00:43:45.494222571Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:45.495127 env[1565]: time="2025-11-01T00:43:45.495087843Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:45.495595 env[1565]: time="2025-11-01T00:43:45.495551252Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:43:45.495967 env[1565]: time="2025-11-01T00:43:45.495900070Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:43:46.655018 env[1565]: time="2025-11-01T00:43:46.654968190Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:46.655644 env[1565]: time="2025-11-01T00:43:46.655580480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:46.656779 env[1565]: time="2025-11-01T00:43:46.656709793Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:46.657863 env[1565]: time="2025-11-01T00:43:46.657828082Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:46.658418 env[1565]: time="2025-11-01T00:43:46.658378650Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:43:46.658777 env[1565]: time="2025-11-01T00:43:46.658731665Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:43:47.685402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695432811.mount: Deactivated successfully. Nov 1 00:43:48.086175 env[1565]: time="2025-11-01T00:43:48.086075077Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:48.086683 env[1565]: time="2025-11-01T00:43:48.086648396Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:48.087148 env[1565]: time="2025-11-01T00:43:48.087118896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:48.088083 env[1565]: time="2025-11-01T00:43:48.088049877Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:48.088252 env[1565]: time="2025-11-01T00:43:48.088202488Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:43:48.088503 env[1565]: time="2025-11-01T00:43:48.088487832Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:43:48.558840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562173918.mount: Deactivated successfully. 
Nov 1 00:43:49.298721 env[1565]: time="2025-11-01T00:43:49.298673989Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:49.299365 env[1565]: time="2025-11-01T00:43:49.299351903Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:49.300504 env[1565]: time="2025-11-01T00:43:49.300484974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:49.301612 env[1565]: time="2025-11-01T00:43:49.301549386Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:49.302037 env[1565]: time="2025-11-01T00:43:49.302024560Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:43:49.302416 env[1565]: time="2025-11-01T00:43:49.302403463Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:43:49.733456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32307430.mount: Deactivated successfully. Nov 1 00:43:49.746179 env[1565]: time="2025-11-01T00:43:49.746162065Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:49.746829 env[1565]: time="2025-11-01T00:43:49.746819233Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:49.747801 env[1565]: time="2025-11-01T00:43:49.747781461Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:49.748902 env[1565]: time="2025-11-01T00:43:49.748856974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:49.749343 env[1565]: time="2025-11-01T00:43:49.749300245Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:43:49.749631 env[1565]: time="2025-11-01T00:43:49.749592662Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:43:50.215383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545800612.mount: Deactivated successfully. 
Nov 1 00:43:51.814550 env[1565]: time="2025-11-01T00:43:51.814475291Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:51.815157 env[1565]: time="2025-11-01T00:43:51.815123530Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:51.816354 env[1565]: time="2025-11-01T00:43:51.816309103Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:51.818199 env[1565]: time="2025-11-01T00:43:51.818157871Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:51.818733 env[1565]: time="2025-11-01T00:43:51.818686274Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:43:54.034430 systemd[1]: Stopped kubelet.service. Nov 1 00:43:54.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:54.036917 systemd[1]: Starting kubelet.service... Nov 1 00:43:54.040060 kernel: kauditd_printk_skb: 88 callbacks suppressed Nov 1 00:43:54.040154 kernel: audit: type=1130 audit(1761957834.033:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:54.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:54.076370 systemd[1]: Reloading. Nov 1 00:43:54.100540 kernel: audit: type=1131 audit(1761957834.033:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:54.108825 /usr/lib/systemd/system-generators/torcx-generator[2034]: time="2025-11-01T00:43:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:43:54.108840 /usr/lib/systemd/system-generators/torcx-generator[2034]: time="2025-11-01T00:43:54Z" level=info msg="torcx already run" Nov 1 00:43:54.164240 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:43:54.164249 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 1 00:43:54.177085 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:43:54.226000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.226000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.336645 kernel: audit: type=1400 audit(1761957834.226:232): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.336725 kernel: audit: type=1400 audit(1761957834.226:233): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.336745 kernel: audit: type=1400 audit(1761957834.226:234): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.226000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.393253 kernel: audit: type=1400 audit(1761957834.226:235): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.226000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.226000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.512116 kernel: audit: type=1400 audit(1761957834.226:236): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.512147 kernel: audit: type=1400 audit(1761957834.226:237): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.226000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.572928 kernel: audit: type=1400 audit(1761957834.226:238): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.226000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.634971 kernel: audit: type=1400 audit(1761957834.226:239): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.226000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.226000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.336000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.336000 audit: BPF prog-id=40 op=LOAD Nov 1 00:43:54.336000 audit: BPF prog-id=35 op=UNLOAD Nov 1 00:43:54.336000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.336000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.336000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.336000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.336000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.336000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.336000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.336000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.510000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.510000 audit: BPF prog-id=41 op=LOAD Nov 1 00:43:54.510000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.510000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:43:54.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.510000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.697000 audit: BPF prog-id=42 op=LOAD Nov 1 00:43:54.697000 audit: BPF prog-id=36 op=UNLOAD Nov 1 00:43:54.697000 audit: BPF prog-id=37 op=UNLOAD Nov 1 00:43:54.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.698000 audit: BPF prog-id=43 op=LOAD Nov 1 00:43:54.698000 audit: BPF prog-id=38 op=UNLOAD Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit: BPF prog-id=44 op=LOAD Nov 1 00:43:54.699000 audit: BPF prog-id=24 op=UNLOAD Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit: BPF prog-id=45 op=LOAD Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit: BPF prog-id=46 op=LOAD Nov 1 00:43:54.699000 audit: BPF prog-id=25 op=UNLOAD Nov 1 00:43:54.699000 audit: BPF prog-id=26 op=UNLOAD Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit: BPF prog-id=47 op=LOAD Nov 1 00:43:54.700000 audit: BPF prog-id=27 op=UNLOAD Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit: BPF prog-id=48 op=LOAD Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.700000 audit: BPF prog-id=49 op=LOAD Nov 1 00:43:54.700000 audit: BPF prog-id=28 op=UNLOAD Nov 1 00:43:54.700000 audit: BPF prog-id=29 op=UNLOAD Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:43:54.701000 audit: BPF prog-id=50 op=LOAD Nov 1 00:43:54.701000 audit: BPF prog-id=34 op=UNLOAD Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit: BPF prog-id=51 op=LOAD Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.701000 audit: BPF prog-id=52 op=LOAD Nov 1 00:43:54.701000 audit: BPF prog-id=30 op=UNLOAD Nov 1 00:43:54.701000 audit: BPF prog-id=31 op=UNLOAD Nov 1 00:43:54.702000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.702000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.702000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.702000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.702000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.702000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.702000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.702000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.702000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.702000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.702000 audit: BPF prog-id=53 op=LOAD Nov 1 00:43:54.702000 audit: BPF prog-id=32 op=UNLOAD Nov 1 00:43:54.703000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.703000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.703000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.703000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.703000 audit[1]: AVC avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.703000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.703000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.703000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.703000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.703000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:54.703000 audit: BPF prog-id=54 op=LOAD Nov 1 00:43:54.703000 audit: BPF prog-id=33 op=UNLOAD Nov 1 00:43:54.709871 systemd[1]: Started kubelet.service. Nov 1 00:43:54.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:54.712224 systemd[1]: Stopping kubelet.service... Nov 1 00:43:54.712471 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:43:54.712596 systemd[1]: Stopped kubelet.service. Nov 1 00:43:54.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:54.713462 systemd[1]: Starting kubelet.service... Nov 1 00:43:54.902084 systemd[1]: Started kubelet.service. Nov 1 00:43:54.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:54.947873 kubelet[2106]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:43:54.947873 kubelet[2106]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:43:54.947873 kubelet[2106]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
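Nearly every audit record between the docker.socket warning and the kubelet restart above is one of two kinds: AVC "denied { perfmon }" with capability=38 (CAP_PERFMON) or "denied { bpf }" with capability=39 (CAP_BPF), raised by PID 1 and interleaved with "BPF prog-id=N op=LOAD/UNLOAD" events; this is the pattern systemd's per-unit BPF programs typically produce while units such as kubelet.service are reloaded and restarted. The short Python sketch below tallies these denials by permission and capability number to show how much of the journal they account for; the input path boot.log and the exact whitespace accepted by the pattern are assumptions, not taken from the log.

#!/usr/bin/env python3
"""Tally the SELinux AVC capability denials in a saved journal dump.

Minimal sketch: the input path "boot.log" is a placeholder, and the regex
only targets single-permission capability denials like the ones above.
"""
import re
from collections import Counter

# Matches records such as:
#   avc: denied { bpf } for pid=1 comm="systemd" capability=39 ... tclass=capability2
AVC = re.compile(r'avc:\s+denied\s+\{\s*(\w+)\s*\}.*?capability=(\d+)')

def tally(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for perm, cap in AVC.findall(line):
                counts[(perm, int(cap))] += 1
    return counts

if __name__ == "__main__":
    for (perm, cap), n in tally("boot.log").most_common():
        print(f"{perm:<10} capability={cap:<3} count={n}")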
Nov 1 00:43:54.948122 kubelet[2106]: I1101 00:43:54.947903 2106 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:43:55.259707 kubelet[2106]: I1101 00:43:55.259663 2106 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:43:55.259707 kubelet[2106]: I1101 00:43:55.259676 2106 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:43:55.259853 kubelet[2106]: I1101 00:43:55.259817 2106 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:43:55.276253 kubelet[2106]: E1101 00:43:55.276217 2106 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://145.40.82.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 145.40.82.49:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:55.276368 kubelet[2106]: I1101 00:43:55.276358 2106 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:43:55.282298 kubelet[2106]: E1101 00:43:55.282279 2106 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:43:55.282298 kubelet[2106]: I1101 00:43:55.282299 2106 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:43:55.302659 kubelet[2106]: I1101 00:43:55.302623 2106 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:43:55.303781 kubelet[2106]: I1101 00:43:55.303730 2106 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:43:55.303903 kubelet[2106]: I1101 00:43:55.303752 2106 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-3bc793b712","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:43:55.303903 kubelet[2106]: I1101 00:43:55.303879 2106 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:43:55.303903 kubelet[2106]: I1101 00:43:55.303886 2106 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:43:55.304059 kubelet[2106]: I1101 00:43:55.303969 2106 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:43:55.307635 kubelet[2106]: I1101 00:43:55.307597 2106 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:43:55.307635 kubelet[2106]: I1101 00:43:55.307613 2106 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:43:55.307635 kubelet[2106]: I1101 00:43:55.307626 2106 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:43:55.307635 kubelet[2106]: I1101 00:43:55.307633 2106 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:43:55.317294 kubelet[2106]: I1101 00:43:55.317243 2106 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:43:55.317586 kubelet[2106]: I1101 00:43:55.317543 2106 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:43:55.337834 kubelet[2106]: W1101 00:43:55.337773 2106 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://145.40.82.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 145.40.82.49:6443: connect: connection refused Nov 1 00:43:55.337834 kubelet[2106]: E1101 00:43:55.337821 2106 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://145.40.82.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 145.40.82.49:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:55.338411 kubelet[2106]: W1101 00:43:55.338347 2106 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://145.40.82.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-3bc793b712&limit=500&resourceVersion=0": dial tcp 145.40.82.49:6443: connect: connection refused Nov 1 00:43:55.338411 kubelet[2106]: E1101 00:43:55.338387 2106 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://145.40.82.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-3bc793b712&limit=500&resourceVersion=0\": dial tcp 145.40.82.49:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:55.338739 kubelet[2106]: W1101 00:43:55.338701 2106 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:43:55.343231 kubelet[2106]: I1101 00:43:55.343192 2106 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:43:55.343231 kubelet[2106]: I1101 00:43:55.343221 2106 server.go:1287] "Started kubelet" Nov 1 00:43:55.343324 kubelet[2106]: I1101 00:43:55.343265 2106 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:43:55.354925 kubelet[2106]: I1101 00:43:55.354609 2106 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:43:55.353000 audit[2106]: AVC avc: denied { mac_admin } for pid=2106 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:55.353000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:43:55.353000 audit[2106]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0010bc0c0 a1=c0010b6468 a2=c0010bc090 a3=25 items=0 ppid=1 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.353000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:43:55.353000 audit[2106]: AVC avc: denied { mac_admin } for pid=2106 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:55.353000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:43:55.353000 audit[2106]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0010a8200 a1=c0010b6480 a2=c0010bc150 a3=25 items=0 ppid=1 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.353000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:43:55.355752 kubelet[2106]: I1101 00:43:55.355099 2106 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:43:55.355752 kubelet[2106]: I1101 00:43:55.355145 2106 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:43:55.355752 kubelet[2106]: I1101 00:43:55.355173 2106 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:43:55.355752 kubelet[2106]: I1101 00:43:55.355279 2106 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:43:55.355752 kubelet[2106]: I1101 00:43:55.355505 2106 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:43:55.355752 kubelet[2106]: I1101 00:43:55.355586 2106 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:43:55.355752 kubelet[2106]: E1101 00:43:55.355594 2106 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-3bc793b712\" not found" Nov 1 00:43:55.355752 kubelet[2106]: I1101 00:43:55.355676 2106 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:43:55.356216 kubelet[2106]: I1101 00:43:55.355740 2106 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:43:55.356281 kubelet[2106]: E1101 00:43:55.356155 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.82.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-3bc793b712?timeout=10s\": dial tcp 145.40.82.49:6443: connect: connection refused" interval="200ms" Nov 1 00:43:55.356345 kubelet[2106]: W1101 00:43:55.356244 2106 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://145.40.82.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.82.49:6443: connect: connection refused Nov 1 00:43:55.356423 kubelet[2106]: E1101 00:43:55.356322 2106 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://145.40.82.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 145.40.82.49:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:55.356551 kubelet[2106]: I1101 00:43:55.356514 2106 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:43:55.356631 kubelet[2106]: I1101 00:43:55.356599 2106 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:43:55.358796 kubelet[2106]: E1101 00:43:55.356832 2106 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://145.40.82.49:6443/api/v1/namespaces/default/events\": dial tcp 145.40.82.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-3bc793b712.1873bb50e4d90dfb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-3bc793b712,UID:ci-3510.3.8-n-3bc793b712,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-3bc793b712,},FirstTimestamp:2025-11-01 00:43:55.343203835 +0000 UTC m=+0.437599847,LastTimestamp:2025-11-01 00:43:55.343203835 +0000 UTC m=+0.437599847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-3bc793b712,}" Nov 1 00:43:55.358796 kubelet[2106]: E1101 00:43:55.358782 2106 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:43:55.359437 kubelet[2106]: I1101 00:43:55.359416 2106 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:43:55.359437 kubelet[2106]: I1101 00:43:55.359433 2106 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:43:55.358000 audit[2132]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:55.358000 audit[2132]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd9b335ec0 a2=0 a3=7ffd9b335eac items=0 ppid=2106 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 00:43:55.359000 audit[2133]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:55.359000 audit[2133]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda396d490 a2=0 a3=7ffda396d47c items=0 ppid=2106 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.359000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:43:55.361000 audit[2135]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:55.361000 audit[2135]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdaa2a7b80 a2=0 a3=7ffdaa2a7b6c items=0 ppid=2106 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.361000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:43:55.363000 audit[2137]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:55.363000 
audit[2137]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff91f219f0 a2=0 a3=7fff91f219dc items=0 ppid=2106 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.363000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:43:55.373798 kubelet[2106]: I1101 00:43:55.373777 2106 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:43:55.373798 kubelet[2106]: I1101 00:43:55.373796 2106 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:43:55.373909 kubelet[2106]: I1101 00:43:55.373815 2106 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:43:55.379000 audit[2140]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:55.379000 audit[2140]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff9a089140 a2=0 a3=7fff9a08912c items=0 ppid=2106 pid=2140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.379000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Nov 1 00:43:55.381000 audit[2141]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:55.381000 audit[2141]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffef16a0b20 a2=0 a3=7ffef16a0b0c items=0 ppid=2106 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 00:43:55.381000 audit[2142]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:55.381000 audit[2142]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd8d1d410 a2=0 a3=7ffcd8d1d3fc items=0 ppid=2106 pid=2142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.381000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:43:55.382000 audit[2144]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:55.382000 audit[2144]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecdc283d0 a2=0 a3=7ffecdc283bc items=0 ppid=2106 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Nov 1 00:43:55.382000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:43:55.382000 audit[2145]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=2145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:55.382000 audit[2145]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf5f0d560 a2=0 a3=7ffdf5f0d54c items=0 ppid=2106 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.382000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:43:55.383000 audit[2146]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=2146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:55.383000 audit[2146]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd93322170 a2=0 a3=7ffd9332215c items=0 ppid=2106 pid=2146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.383000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:43:55.383000 audit[2147]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:55.383000 audit[2147]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff7f50a010 a2=0 a3=7fff7f509ffc items=0 ppid=2106 pid=2147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.383000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:43:55.384000 audit[2148]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:55.384000 audit[2148]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc7c369450 a2=0 a3=7ffc7c36943c items=0 ppid=2106 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.384000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:43:55.386055 kubelet[2106]: I1101 00:43:55.381647 2106 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:43:55.386055 kubelet[2106]: I1101 00:43:55.382741 2106 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:43:55.386055 kubelet[2106]: I1101 00:43:55.382766 2106 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:43:55.386055 kubelet[2106]: I1101 00:43:55.382793 2106 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
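The NETFILTER_CFG/SYSCALL/PROCTITLE triplets above record the kubelet creating its KUBE-IPTABLES-HINT, KUBE-FIREWALL, and KUBE-KUBELET-CANARY chains through /usr/sbin/xtables-nft-multi. The proctitle= field in each triplet is the invoking command line, hex-encoded because its arguments are separated by NUL bytes. Below is a minimal Python sketch that decodes it; the helper name is mine, and the sample value is copied from the nat-table record above.

#!/usr/bin/env python3
"""Decode an audit PROCTITLE value back into the command line it records.

Minimal sketch: audit hex-encodes the command line because its arguments
are separated by NUL bytes; the sample below is copied from the log above.
"""

def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    # Arguments are NUL-separated; drop any empty trailing field.
    return [part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part]

if __name__ == "__main__":
    sample = ("69707461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174")
    print(decode_proctitle(sample))
    # ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-KUBELET-CANARY', '-t', 'nat']

Decoding the remaining proctitle values the same way yields the corresponding iptables and ip6tables invocations for the mangle and filter tables.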
Nov 1 00:43:55.386055 kubelet[2106]: I1101 00:43:55.382807 2106 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:43:55.386055 kubelet[2106]: E1101 00:43:55.382866 2106 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:43:55.386055 kubelet[2106]: W1101 00:43:55.383162 2106 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://145.40.82.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.82.49:6443: connect: connection refused Nov 1 00:43:55.386055 kubelet[2106]: E1101 00:43:55.383196 2106 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://145.40.82.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 145.40.82.49:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:55.386055 kubelet[2106]: I1101 00:43:55.385278 2106 policy_none.go:49] "None policy: Start" Nov 1 00:43:55.386055 kubelet[2106]: I1101 00:43:55.385304 2106 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:43:55.386055 kubelet[2106]: I1101 00:43:55.385317 2106 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:43:55.389774 systemd[1]: Created slice kubepods.slice. Nov 1 00:43:55.394989 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 00:43:55.398409 systemd[1]: Created slice kubepods-besteffort.slice. Nov 1 00:43:55.412687 kubelet[2106]: I1101 00:43:55.412631 2106 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:43:55.411000 audit[2106]: AVC avc: denied { mac_admin } for pid=2106 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:55.411000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:43:55.411000 audit[2106]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0014401e0 a1=c0013b2c18 a2=c000f21fb0 a3=25 items=0 ppid=1 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:55.411000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:43:55.413092 kubelet[2106]: I1101 00:43:55.412703 2106 server.go:94] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:43:55.413092 kubelet[2106]: I1101 00:43:55.412839 2106 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:43:55.413092 kubelet[2106]: I1101 00:43:55.412856 2106 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:43:55.413092 kubelet[2106]: I1101 00:43:55.413054 2106 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:43:55.414106 kubelet[2106]: E1101 00:43:55.414038 2106 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:43:55.414106 kubelet[2106]: E1101 00:43:55.414104 2106 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-3bc793b712\" not found" Nov 1 00:43:55.505861 systemd[1]: Created slice kubepods-burstable-pod3619d1d26841327f0b5a8ab2e479624c.slice. Nov 1 00:43:55.516280 kubelet[2106]: I1101 00:43:55.516132 2106 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.517000 kubelet[2106]: E1101 00:43:55.516894 2106 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.82.49:6443/api/v1/nodes\": dial tcp 145.40.82.49:6443: connect: connection refused" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.538376 kubelet[2106]: E1101 00:43:55.538281 2106 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-3bc793b712\" not found" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.545224 systemd[1]: Created slice kubepods-burstable-pod8676602d0f69db7ae88cb859f416708d.slice. Nov 1 00:43:55.557353 kubelet[2106]: E1101 00:43:55.557259 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.82.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-3bc793b712?timeout=10s\": dial tcp 145.40.82.49:6443: connect: connection refused" interval="400ms" Nov 1 00:43:55.558269 kubelet[2106]: E1101 00:43:55.558199 2106 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-3bc793b712\" not found" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.564368 systemd[1]: Created slice kubepods-burstable-pode211b67e53f2c38825be43c94f461cba.slice. 
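With the systemd cgroup driver shown in the node config above ("CgroupDriver":"systemd", CgroupVersion 2), the kubelet gives each static pod its own slice under the kubepods-burstable.slice created earlier, and the hex suffix of each kubepods-burstable-pod<uid>.slice unit is the pod UID that reappears in the volume reconciler entries that follow. The sketch below reconstructs that mapping; the /sys/fs/cgroup mount point, the helper names, and the dash-to-underscore escaping note are general kubelet/systemd behaviour assumed here rather than anything stated in this log.

#!/usr/bin/env python3
"""Map a pod UID to its systemd slice and cgroup v2 directory.

Sketch only: the /sys/fs/cgroup mount point and the QoS class default are
assumptions; the slice naming mirrors the "Created slice" entries above.
"""

def pod_slice(pod_uid: str, qos: str = "burstable") -> str:
    # systemd encodes slice nesting with dashes, so this leaf unit implies
    # kubepods.slice/kubepods-<qos>.slice above it. Pod UIDs containing
    # dashes are written with underscores in the unit name.
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

def cgroup_path(pod_uid: str, qos: str = "burstable") -> str:
    return f"/sys/fs/cgroup/kubepods.slice/kubepods-{qos}.slice/{pod_slice(pod_uid, qos)}"

if __name__ == "__main__":
    # UID of the kube-apiserver static pod from the log above.
    print(cgroup_path("3619d1d26841327f0b5a8ab2e479624c"))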
Nov 1 00:43:55.568040 kubelet[2106]: E1101 00:43:55.567953 2106 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-3bc793b712\" not found" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.656466 kubelet[2106]: I1101 00:43:55.656350 2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8676602d0f69db7ae88cb859f416708d-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-3bc793b712\" (UID: \"8676602d0f69db7ae88cb859f416708d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.656466 kubelet[2106]: I1101 00:43:55.656465 2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e211b67e53f2c38825be43c94f461cba-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-3bc793b712\" (UID: \"e211b67e53f2c38825be43c94f461cba\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.656886 kubelet[2106]: I1101 00:43:55.656589 2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8676602d0f69db7ae88cb859f416708d-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-3bc793b712\" (UID: \"8676602d0f69db7ae88cb859f416708d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.656886 kubelet[2106]: I1101 00:43:55.656651 2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3619d1d26841327f0b5a8ab2e479624c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-3bc793b712\" (UID: \"3619d1d26841327f0b5a8ab2e479624c\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.656886 kubelet[2106]: I1101 00:43:55.656745 2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8676602d0f69db7ae88cb859f416708d-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-3bc793b712\" (UID: \"8676602d0f69db7ae88cb859f416708d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.656886 kubelet[2106]: I1101 00:43:55.656826 2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8676602d0f69db7ae88cb859f416708d-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-3bc793b712\" (UID: \"8676602d0f69db7ae88cb859f416708d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.657250 kubelet[2106]: I1101 00:43:55.656885 2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8676602d0f69db7ae88cb859f416708d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-3bc793b712\" (UID: \"8676602d0f69db7ae88cb859f416708d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.657250 kubelet[2106]: I1101 00:43:55.656935 2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3619d1d26841327f0b5a8ab2e479624c-ca-certs\") pod 
\"kube-apiserver-ci-3510.3.8-n-3bc793b712\" (UID: \"3619d1d26841327f0b5a8ab2e479624c\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.657250 kubelet[2106]: I1101 00:43:55.656979 2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3619d1d26841327f0b5a8ab2e479624c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-3bc793b712\" (UID: \"3619d1d26841327f0b5a8ab2e479624c\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.720918 kubelet[2106]: I1101 00:43:55.720830 2106 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.721745 kubelet[2106]: E1101 00:43:55.721634 2106 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.82.49:6443/api/v1/nodes\": dial tcp 145.40.82.49:6443: connect: connection refused" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:43:55.840434 env[1565]: time="2025-11-01T00:43:55.840187937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-3bc793b712,Uid:3619d1d26841327f0b5a8ab2e479624c,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:55.861115 env[1565]: time="2025-11-01T00:43:55.861038021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-3bc793b712,Uid:8676602d0f69db7ae88cb859f416708d,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:55.870389 env[1565]: time="2025-11-01T00:43:55.870312368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-3bc793b712,Uid:e211b67e53f2c38825be43c94f461cba,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:55.958739 kubelet[2106]: E1101 00:43:55.958620 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.82.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-3bc793b712?timeout=10s\": dial tcp 145.40.82.49:6443: connect: connection refused" interval="800ms" Nov 1 00:43:56.124416 kubelet[2106]: I1101 00:43:56.124376 2106 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:43:56.124641 kubelet[2106]: E1101 00:43:56.124600 2106 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.82.49:6443/api/v1/nodes\": dial tcp 145.40.82.49:6443: connect: connection refused" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:43:56.185323 kubelet[2106]: W1101 00:43:56.185167 2106 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://145.40.82.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.82.49:6443: connect: connection refused Nov 1 00:43:56.185323 kubelet[2106]: E1101 00:43:56.185313 2106 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://145.40.82.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 145.40.82.49:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:56.254372 kubelet[2106]: W1101 00:43:56.254217 2106 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://145.40.82.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 145.40.82.49:6443: connect: connection refused Nov 1 00:43:56.254372 kubelet[2106]: E1101 
00:43:56.254363 2106 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://145.40.82.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 145.40.82.49:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:56.303735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount485548914.mount: Deactivated successfully. Nov 1 00:43:56.304968 env[1565]: time="2025-11-01T00:43:56.304920389Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:56.306045 env[1565]: time="2025-11-01T00:43:56.306002177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:56.306515 env[1565]: time="2025-11-01T00:43:56.306448975Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:56.307266 env[1565]: time="2025-11-01T00:43:56.307219722Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:56.307707 env[1565]: time="2025-11-01T00:43:56.307668368Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:56.308909 env[1565]: time="2025-11-01T00:43:56.308869753Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:56.309282 env[1565]: time="2025-11-01T00:43:56.309241019Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:56.310908 env[1565]: time="2025-11-01T00:43:56.310864693Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:56.312125 env[1565]: time="2025-11-01T00:43:56.312085691Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:56.312962 env[1565]: time="2025-11-01T00:43:56.312923798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:56.313387 env[1565]: time="2025-11-01T00:43:56.313348896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:56.313800 env[1565]: time="2025-11-01T00:43:56.313760082Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:56.318192 env[1565]: time="2025-11-01T00:43:56.318162270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:56.318192 env[1565]: time="2025-11-01T00:43:56.318182397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:56.318192 env[1565]: time="2025-11-01T00:43:56.318189703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:56.318287 env[1565]: time="2025-11-01T00:43:56.318256741Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ae64ff16eed6df5d6345a9ea4beabaa7fc0956bacf6763a3d415894daadd924 pid=2157 runtime=io.containerd.runc.v2 Nov 1 00:43:56.320073 env[1565]: time="2025-11-01T00:43:56.320037197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:56.320073 env[1565]: time="2025-11-01T00:43:56.320056878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:56.320207 env[1565]: time="2025-11-01T00:43:56.320071763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:56.320207 env[1565]: time="2025-11-01T00:43:56.320141839Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec0b2500c73b1e27cd55de2648b232c7091b39f224b0d1a26812528f500d8953 pid=2177 runtime=io.containerd.runc.v2 Nov 1 00:43:56.320710 env[1565]: time="2025-11-01T00:43:56.320687970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:56.320710 env[1565]: time="2025-11-01T00:43:56.320704117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:56.320767 env[1565]: time="2025-11-01T00:43:56.320711026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:56.320794 env[1565]: time="2025-11-01T00:43:56.320772032Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27351cda1efb26e8560cb8a5b34fc46b0e79238cfce9c209d84145cf4551cf3d pid=2186 runtime=io.containerd.runc.v2 Nov 1 00:43:56.325174 systemd[1]: Started cri-containerd-9ae64ff16eed6df5d6345a9ea4beabaa7fc0956bacf6763a3d415894daadd924.scope. Nov 1 00:43:56.327727 systemd[1]: Started cri-containerd-27351cda1efb26e8560cb8a5b34fc46b0e79238cfce9c209d84145cf4551cf3d.scope. Nov 1 00:43:56.328384 systemd[1]: Started cri-containerd-ec0b2500c73b1e27cd55de2648b232c7091b39f224b0d1a26812528f500d8953.scope. 
Nov 1 00:43:56.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.331000 audit: BPF prog-id=55 op=LOAD Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=2157 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961653634666631366565643664663564363334356139656134626561 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=c items=0 ppid=2157 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.332000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961653634666631366565643664663564363334356139656134626561 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit: BPF prog-id=56 op=LOAD Nov 1 00:43:56.332000 audit[2174]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c0000f9d90 items=0 ppid=2157 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961653634666631366565643664663564363334356139656134626561 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: 
denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit: BPF prog-id=57 op=LOAD Nov 1 00:43:56.332000 audit[2174]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c0000f9dd8 items=0 ppid=2157 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961653634666631366565643664663564363334356139656134626561 Nov 1 00:43:56.332000 audit: BPF prog-id=57 op=UNLOAD Nov 1 00:43:56.332000 audit: BPF prog-id=56 op=UNLOAD Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: 
denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { perfmon } for pid=2174 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2174]: AVC avc: denied { bpf } for pid=2174 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit: BPF prog-id=58 op=LOAD Nov 1 00:43:56.332000 audit[2174]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c0003ea1e8 items=0 ppid=2157 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961653634666631366565643664663564363334356139656134626561 Nov 1 00:43:56.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit: BPF prog-id=59 op=LOAD Nov 1 00:43:56.332000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2205]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2186 pid=2205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333531636461316566623236653835363063623861356233346663 Nov 1 00:43:56.332000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.332000 audit[2205]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2186 pid=2205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333531636461316566623236653835363063623861356233346663 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for 
pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit: BPF prog-id=60 op=LOAD Nov 1 00:43:56.333000 audit[2205]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c0002c3dc0 items=0 ppid=2186 pid=2205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333531636461316566623236653835363063623861356233346663 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit: BPF prog-id=61 op=LOAD Nov 1 00:43:56.333000 audit[2205]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c0002c3e08 items=0 ppid=2186 pid=2205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.333000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333531636461316566623236653835363063623861356233346663 Nov 1 00:43:56.333000 audit: BPF prog-id=61 op=UNLOAD Nov 1 00:43:56.333000 audit: BPF prog-id=60 op=UNLOAD Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { perfmon } for pid=2205 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2205]: AVC avc: denied { bpf } for pid=2205 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit: BPF prog-id=62 op=LOAD Nov 1 00:43:56.333000 audit[2205]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c0003b6218 items=0 ppid=2186 pid=2205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333531636461316566623236653835363063623861356233346663 Nov 1 00:43:56.333000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit: BPF prog-id=63 op=LOAD Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2177 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563306232353030633733623165323763643535646532363438623233 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2177 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563306232353030633733623165323763643535646532363438623233 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { 
bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit: BPF prog-id=64 op=LOAD Nov 1 00:43:56.333000 audit[2203]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000098bd0 items=0 ppid=2177 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563306232353030633733623165323763643535646532363438623233 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit: BPF prog-id=65 op=LOAD Nov 1 00:43:56.333000 audit[2203]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000098c18 items=0 ppid=2177 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563306232353030633733623165323763643535646532363438623233 Nov 1 00:43:56.333000 audit: BPF prog-id=65 op=UNLOAD Nov 1 00:43:56.333000 audit: BPF prog-id=64 op=UNLOAD Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.333000 audit: BPF prog-id=66 op=LOAD Nov 1 00:43:56.333000 audit[2203]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000099028 items=0 ppid=2177 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563306232353030633733623165323763643535646532363438623233 Nov 1 00:43:56.349347 env[1565]: time="2025-11-01T00:43:56.349300351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-3bc793b712,Uid:3619d1d26841327f0b5a8ab2e479624c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ae64ff16eed6df5d6345a9ea4beabaa7fc0956bacf6763a3d415894daadd924\"" Nov 1 00:43:56.351161 env[1565]: time="2025-11-01T00:43:56.351137895Z" level=info msg="CreateContainer within sandbox \"9ae64ff16eed6df5d6345a9ea4beabaa7fc0956bacf6763a3d415894daadd924\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:43:56.351240 env[1565]: time="2025-11-01T00:43:56.351143022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-3bc793b712,Uid:8676602d0f69db7ae88cb859f416708d,Namespace:kube-system,Attempt:0,} returns sandbox id \"27351cda1efb26e8560cb8a5b34fc46b0e79238cfce9c209d84145cf4551cf3d\"" Nov 1 00:43:56.352107 env[1565]: time="2025-11-01T00:43:56.352092805Z" level=info msg="CreateContainer within sandbox \"27351cda1efb26e8560cb8a5b34fc46b0e79238cfce9c209d84145cf4551cf3d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:43:56.353304 env[1565]: time="2025-11-01T00:43:56.353287259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-3bc793b712,Uid:e211b67e53f2c38825be43c94f461cba,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec0b2500c73b1e27cd55de2648b232c7091b39f224b0d1a26812528f500d8953\"" Nov 1 00:43:56.354249 env[1565]: time="2025-11-01T00:43:56.354234497Z" level=info msg="CreateContainer within sandbox \"ec0b2500c73b1e27cd55de2648b232c7091b39f224b0d1a26812528f500d8953\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:43:56.358039 env[1565]: time="2025-11-01T00:43:56.358000559Z" level=info msg="CreateContainer within sandbox \"27351cda1efb26e8560cb8a5b34fc46b0e79238cfce9c209d84145cf4551cf3d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"24a3619d40c2e3de8eba001c401a278b28d887a04abb8df3eb9dfe6d6b3fbf45\"" Nov 1 00:43:56.358283 env[1565]: time="2025-11-01T00:43:56.358247092Z" level=info msg="StartContainer for \"24a3619d40c2e3de8eba001c401a278b28d887a04abb8df3eb9dfe6d6b3fbf45\"" Nov 1 00:43:56.359410 env[1565]: time="2025-11-01T00:43:56.359390213Z" 
level=info msg="CreateContainer within sandbox \"9ae64ff16eed6df5d6345a9ea4beabaa7fc0956bacf6763a3d415894daadd924\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4865c0db64e69016cd8f59571bcfa5a164a0f6e37ef842f750d93425d243f62c\"" Nov 1 00:43:56.359573 env[1565]: time="2025-11-01T00:43:56.359557503Z" level=info msg="StartContainer for \"4865c0db64e69016cd8f59571bcfa5a164a0f6e37ef842f750d93425d243f62c\"" Nov 1 00:43:56.360128 env[1565]: time="2025-11-01T00:43:56.360107720Z" level=info msg="CreateContainer within sandbox \"ec0b2500c73b1e27cd55de2648b232c7091b39f224b0d1a26812528f500d8953\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5e6a934337c69d9a052349be9f3be138d7f51044cea74e8fa41cbd777cc3fe79\"" Nov 1 00:43:56.360265 env[1565]: time="2025-11-01T00:43:56.360251773Z" level=info msg="StartContainer for \"5e6a934337c69d9a052349be9f3be138d7f51044cea74e8fa41cbd777cc3fe79\"" Nov 1 00:43:56.366118 systemd[1]: Started cri-containerd-24a3619d40c2e3de8eba001c401a278b28d887a04abb8df3eb9dfe6d6b3fbf45.scope. Nov 1 00:43:56.368449 systemd[1]: Started cri-containerd-4865c0db64e69016cd8f59571bcfa5a164a0f6e37ef842f750d93425d243f62c.scope. Nov 1 00:43:56.369039 systemd[1]: Started cri-containerd-5e6a934337c69d9a052349be9f3be138d7f51044cea74e8fa41cbd777cc3fe79.scope. Nov 1 00:43:56.373000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit: BPF prog-id=67 op=LOAD Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000195c48 a2=10 a3=1c items=0 ppid=2186 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613336313964343063326533646538656261303031633430316132 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001956b0 a2=3c a3=8 items=0 ppid=2186 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613336313964343063326533646538656261303031633430316132 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: 
denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit: BPF prog-id=68 op=LOAD Nov 1 00:43:56.373000 audit[2279]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001959d8 a2=78 a3=c0003070a0 items=0 ppid=2186 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613336313964343063326533646538656261303031633430316132 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit: BPF prog-id=69 op=LOAD Nov 1 00:43:56.373000 audit[2279]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000195770 a2=78 a3=c0003070e8 items=0 ppid=2186 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613336313964343063326533646538656261303031633430316132 Nov 1 00:43:56.373000 audit: BPF prog-id=69 op=UNLOAD Nov 1 00:43:56.373000 
audit: BPF prog-id=68 op=UNLOAD Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { perfmon } for pid=2279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit[2279]: AVC avc: denied { bpf } for pid=2279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.373000 audit: BPF prog-id=70 op=LOAD Nov 1 00:43:56.373000 audit[2279]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000195c30 a2=78 a3=c0003074f8 items=0 ppid=2186 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613336313964343063326533646538656261303031633430316132 Nov 1 00:43:56.374000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.374000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.374000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.374000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.374000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.374000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.374000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.374000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.374000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.374000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.374000 audit: BPF prog-id=71 op=LOAD Nov 1 00:43:56.374000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.374000 audit[2295]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=2157 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438363563306462363465363930313663643866353935373162636661 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=8 items=0 ppid=2157 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438363563306462363465363930313663643866353935373162636661 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit: BPF prog-id=72 op=LOAD Nov 1 00:43:56.375000 audit[2295]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c000383dc0 items=0 ppid=2157 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438363563306462363465363930313663643866353935373162636661 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit: BPF prog-id=73 op=LOAD Nov 1 00:43:56.375000 audit[2295]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c000383e08 items=0 ppid=2157 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438363563306462363465363930313663643866353935373162636661 Nov 1 00:43:56.375000 audit: BPF prog-id=73 op=UNLOAD Nov 1 00:43:56.375000 audit: BPF prog-id=72 op=UNLOAD Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { perfmon } for pid=2295 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[2295]: AVC avc: denied { bpf } for pid=2295 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit: BPF prog-id=74 op=LOAD Nov 1 00:43:56.375000 audit[2295]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c0003f6218 items=0 ppid=2157 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438363563306462363465363930313663643866353935373162636661 Nov 1 00:43:56.375000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.375000 audit: BPF prog-id=75 op=LOAD Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2177 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.376000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565366139333433333763363964396130353233343962653966336265 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2177 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565366139333433333763363964396130353233343962653966336265 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit: BPF prog-id=76 op=LOAD Nov 1 00:43:56.376000 audit[2296]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0002a9c30 items=0 ppid=2177 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565366139333433333763363964396130353233343962653966336265 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit: BPF prog-id=77 op=LOAD Nov 1 00:43:56.376000 audit[2296]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0002a9c78 items=0 ppid=2177 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565366139333433333763363964396130353233343962653966336265 Nov 1 00:43:56.376000 audit: BPF prog-id=77 op=UNLOAD Nov 1 00:43:56.376000 audit: BPF prog-id=76 op=UNLOAD Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: 
AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { perfmon } for pid=2296 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit[2296]: AVC avc: denied { bpf } for pid=2296 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:56.376000 audit: BPF prog-id=78 op=LOAD Nov 1 00:43:56.376000 audit[2296]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000392088 items=0 ppid=2177 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565366139333433333763363964396130353233343962653966336265 Nov 1 00:43:56.393382 env[1565]: time="2025-11-01T00:43:56.393359175Z" level=info msg="StartContainer for \"4865c0db64e69016cd8f59571bcfa5a164a0f6e37ef842f750d93425d243f62c\" returns successfully" Nov 1 00:43:56.393528 env[1565]: time="2025-11-01T00:43:56.393513604Z" level=info msg="StartContainer for \"24a3619d40c2e3de8eba001c401a278b28d887a04abb8df3eb9dfe6d6b3fbf45\" returns successfully" Nov 1 00:43:56.394521 env[1565]: time="2025-11-01T00:43:56.394502946Z" level=info msg="StartContainer for \"5e6a934337c69d9a052349be9f3be138d7f51044cea74e8fa41cbd777cc3fe79\" returns successfully" Nov 1 00:43:56.860000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:43:56.860000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6 a1=c000aa8300 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" 
subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:43:56.860000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:43:56.860000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:43:56.860000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0003e82e0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:43:56.860000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:43:56.926795 kubelet[2106]: I1101 00:43:56.926778 2106 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:43:56.945000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:43:56.945000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=3f a1=c005c345a0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:43:56.945000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:43:56.946000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:43:56.946000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=3f a1=c00590e030 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:43:56.946000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:43:56.950000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 
scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:43:56.950000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c0062ce090 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:43:56.950000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:43:56.950000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sdb9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:43:56.950000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=47 a1=c0039a8360 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:43:56.950000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:43:56.950000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:43:56.950000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=5b a1=c0061d6040 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:43:56.950000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:43:56.950000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:43:56.950000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=5e a1=c0043a1d10 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:43:56.950000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:43:56.965232 kubelet[2106]: E1101 00:43:56.965176 2106 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-3bc793b712\" not found" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:43:57.069273 kubelet[2106]: I1101 00:43:57.069197 2106 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:43:57.069273 kubelet[2106]: E1101 00:43:57.069247 2106 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-3bc793b712\": node \"ci-3510.3.8-n-3bc793b712\" not found" Nov 1 00:43:57.156849 kubelet[2106]: I1101 00:43:57.156628 2106 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:57.167648 kubelet[2106]: E1101 00:43:57.167540 2106 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-3bc793b712\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:57.167648 kubelet[2106]: I1101 00:43:57.167593 2106 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:57.170946 kubelet[2106]: E1101 00:43:57.170852 2106 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-3bc793b712\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:57.170946 kubelet[2106]: I1101 00:43:57.170909 2106 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:57.174659 kubelet[2106]: E1101 00:43:57.174554 2106 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-3bc793b712\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:57.309064 kubelet[2106]: I1101 00:43:57.308978 2106 apiserver.go:52] "Watching apiserver" Nov 1 00:43:57.355931 kubelet[2106]: I1101 00:43:57.355842 2106 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:43:57.391017 kubelet[2106]: I1101 00:43:57.390964 2106 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:57.393769 kubelet[2106]: I1101 00:43:57.393722 2106 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:57.394889 kubelet[2106]: E1101 00:43:57.394827 2106 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-3bc793b712\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:57.396470 kubelet[2106]: I1101 00:43:57.396425 2106 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:57.397874 kubelet[2106]: E1101 00:43:57.397808 2106 kubelet.go:3196] "Failed creating a mirror pod" 
err="pods \"kube-controller-manager-ci-3510.3.8-n-3bc793b712\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:57.400164 kubelet[2106]: E1101 00:43:57.400081 2106 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-3bc793b712\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:58.399735 kubelet[2106]: I1101 00:43:58.399650 2106 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:58.400889 kubelet[2106]: I1101 00:43:58.399948 2106 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:58.400889 kubelet[2106]: I1101 00:43:58.400042 2106 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:43:58.409572 kubelet[2106]: W1101 00:43:58.409484 2106 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:43:58.409882 kubelet[2106]: W1101 00:43:58.409734 2106 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:43:58.410448 kubelet[2106]: W1101 00:43:58.410412 2106 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:43:59.386981 systemd[1]: Reloading. Nov 1 00:43:59.425827 /usr/lib/systemd/system-generators/torcx-generator[2438]: time="2025-11-01T00:43:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:43:59.425844 /usr/lib/systemd/system-generators/torcx-generator[2438]: time="2025-11-01T00:43:59Z" level=info msg="torcx already run" Nov 1 00:43:59.482157 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:43:59.482166 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:43:59.494562 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 00:43:59.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.578581 kernel: kauditd_printk_skb: 581 callbacks suppressed Nov 1 00:43:59.578617 kernel: audit: type=1400 audit(1761957839.550:538): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.703798 kernel: audit: type=1400 audit(1761957839.550:539): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.703851 kernel: audit: type=1400 audit(1761957839.550:540): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.767704 kernel: audit: type=1400 audit(1761957839.550:541): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.832194 kernel: audit: type=1400 audit(1761957839.550:542): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.896446 kernel: audit: type=1400 audit(1761957839.550:543): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.959568 kernel: audit: type=1400 audit(1761957839.550:544): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.022806 kernel: audit: type=1400 audit(1761957839.550:545): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.550000 audit[1]: AVC avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.085968 kernel: audit: type=1400 audit(1761957839.550:546): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.148762 kernel: audit: type=1400 audit(1761957839.639:547): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.639000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.639000 audit: BPF prog-id=79 op=LOAD Nov 1 00:43:59.639000 audit: BPF prog-id=63 op=UNLOAD Nov 1 00:43:59.640000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.640000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.640000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.640000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.640000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.640000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.640000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.640000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.640000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.767000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.767000 audit: BPF prog-id=80 op=LOAD Nov 1 00:43:59.767000 audit: BPF prog-id=40 op=UNLOAD Nov 1 00:43:59.767000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.767000 
audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.767000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.767000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.767000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.767000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.767000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.767000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.894000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.894000 audit: BPF prog-id=81 op=LOAD Nov 1 00:43:59.894000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.894000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:59.894000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.084000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
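The repeated capability=38 and capability=39 denials correspond to CAP_PERFMON and CAP_BPF from linux/capability.h (both introduced in Linux 5.8); capabilities above 31 are checked against the capability2 class, which is why that class appears here. A small Python lookup to turn these records into readable summaries; CAPABILITY_NAMES and describe_avc are hypothetical helpers, not part of any tool in this log.

# Map the capability numbers seen in the AVC records above to their names.
CAPABILITY_NAMES = {38: "CAP_PERFMON", 39: "CAP_BPF"}

def describe_avc(comm, capability, permissive):
    name = CAPABILITY_NAMES.get(capability, "capability %d" % capability)
    # permissive=0 means the denial was enforced rather than merely logged.
    mode = "permissive (logged only)" if permissive else "enforcing (denied)"
    return "%s requested %s: %s" % (comm, name, mode)

print(describe_avc("systemd", 39, 0))  # systemd requested CAP_BPF: enforcing (denied)
print(describe_avc("runc", 38, 0))     # runc requested CAP_PERFMON: enforcing (denied)
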
Nov 1 00:44:00.084000 audit: BPF prog-id=82 op=LOAD Nov 1 00:44:00.084000 audit: BPF prog-id=41 op=UNLOAD Nov 1 00:44:00.084000 audit: BPF prog-id=42 op=UNLOAD Nov 1 00:44:00.084000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.084000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.084000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.084000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.084000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.084000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.084000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.084000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.084000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.210000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.211000 audit: BPF prog-id=83 op=LOAD Nov 1 00:44:00.211000 audit: BPF prog-id=59 op=UNLOAD Nov 1 00:44:00.211000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.211000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.211000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.211000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.211000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.211000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:00.211000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.211000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.211000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.211000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.211000 audit: BPF prog-id=84 op=LOAD Nov 1 00:44:00.211000 audit: BPF prog-id=71 op=UNLOAD Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit: BPF prog-id=85 op=LOAD Nov 1 00:44:00.212000 audit: BPF prog-id=43 op=UNLOAD Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
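The BPF prog-id=… op=LOAD / op=UNLOAD pairs in this stretch appear to be systemd swapping out its per-unit BPF programs as part of the reload above. A sketch that tallies those records from a saved copy of this log to see which program IDs remain loaded afterwards; live_prog_ids and the boot.log filename are placeholders.

import re

BPF_EVENT = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def live_prog_ids(log_path):
    live = set()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            # A single physical line may carry several audit records.
            for prog_id, op in BPF_EVENT.findall(line):
                if op == "LOAD":
                    live.add(int(prog_id))
                else:
                    live.discard(int(prog_id))
    return live

print(sorted(live_prog_ids("boot.log")))  # placeholder path to a saved journal dump
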
Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.212000 audit: BPF prog-id=86 op=LOAD Nov 1 00:44:00.213000 audit: BPF prog-id=55 op=UNLOAD Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit: BPF prog-id=87 op=LOAD Nov 1 00:44:00.213000 audit: BPF prog-id=44 op=UNLOAD Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit: BPF prog-id=88 op=LOAD Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit: BPF prog-id=89 op=LOAD Nov 1 00:44:00.213000 audit: BPF prog-id=45 op=UNLOAD Nov 1 00:44:00.213000 audit: BPF prog-id=46 op=UNLOAD Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.213000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit: BPF prog-id=90 op=LOAD Nov 1 00:44:00.214000 audit: BPF prog-id=47 op=UNLOAD Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit: BPF prog-id=91 op=LOAD Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit: BPF prog-id=92 op=LOAD Nov 1 00:44:00.214000 audit: BPF prog-id=48 op=UNLOAD Nov 1 00:44:00.214000 audit: BPF prog-id=49 op=UNLOAD Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.214000 audit: BPF prog-id=93 op=LOAD Nov 1 00:44:00.214000 audit: BPF prog-id=75 op=UNLOAD Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } 
for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit: BPF prog-id=94 op=LOAD Nov 1 00:44:00.215000 audit: BPF prog-id=50 op=UNLOAD Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit: BPF prog-id=95 op=LOAD Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.215000 audit: BPF prog-id=96 op=LOAD Nov 1 00:44:00.215000 audit: BPF prog-id=51 op=UNLOAD Nov 1 00:44:00.215000 audit: BPF prog-id=52 op=UNLOAD Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit: BPF prog-id=97 op=LOAD Nov 1 00:44:00.216000 audit: BPF prog-id=53 op=UNLOAD Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.216000 audit: BPF prog-id=98 op=LOAD Nov 1 00:44:00.216000 audit: BPF prog-id=67 op=UNLOAD Nov 1 00:44:00.217000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.217000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:44:00.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.218000 audit: BPF prog-id=99 op=LOAD Nov 1 00:44:00.218000 audit: BPF prog-id=54 op=UNLOAD Nov 1 00:44:00.224387 systemd[1]: Stopping kubelet.service... Nov 1 00:44:00.248876 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:44:00.248986 systemd[1]: Stopped kubelet.service. Nov 1 00:44:00.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:00.249869 systemd[1]: Starting kubelet.service... Nov 1 00:44:00.472070 systemd[1]: Started kubelet.service. Nov 1 00:44:00.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:00.493903 kubelet[2501]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:44:00.493903 kubelet[2501]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:44:00.493903 kubelet[2501]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:44:00.494170 kubelet[2501]: I1101 00:44:00.493938 2501 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:44:00.498312 kubelet[2501]: I1101 00:44:00.498296 2501 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:44:00.498312 kubelet[2501]: I1101 00:44:00.498308 2501 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:44:00.498453 kubelet[2501]: I1101 00:44:00.498447 2501 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:44:00.499310 kubelet[2501]: I1101 00:44:00.499302 2501 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:44:00.500672 kubelet[2501]: I1101 00:44:00.500662 2501 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:44:00.502416 kubelet[2501]: E1101 00:44:00.502401 2501 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:44:00.502452 kubelet[2501]: I1101 00:44:00.502417 2501 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
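The audit burst above is consistent with systemd (pid 1) refreshing its per-unit BPF programs around the kubelet restart: each `BPF prog-id=N op=LOAD` / `op=UNLOAD` pair swaps one program, and the surrounding AVC records show SELinux denying the corresponding capability checks, capability 39 (CAP_BPF) and 38 (CAP_PERFMON), with `permissive=0`. A minimal Python sketch for tallying records in this format is below; the `boot.log` path and the regexes are assumptions matching the lines shown here, not anything the system itself produces.

```python
import re
from collections import Counter

# Capability numbers seen in the AVC records above (values from linux/capability.h).
CAPS = {38: "CAP_PERFMON", 39: "CAP_BPF"}

AVC_RE = re.compile(r"avc:\s+denied\s+\{\s*(\w+)\s*\}.*?capability=(\d+)")
BPF_RE = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def summarize(lines):
    """Tally AVC capability denials and BPF program load/unload operations."""
    denials, bpf_ops = Counter(), Counter()
    for line in lines:
        for perm, cap in AVC_RE.findall(line):
            denials[(perm, CAPS.get(int(cap), f"capability={cap}"))] += 1
        for _prog_id, op in BPF_RE.findall(line):
            bpf_ops[op] += 1
    return denials, bpf_ops

if __name__ == "__main__":
    # "boot.log" is a placeholder; point it at the journal text shown above.
    with open("boot.log") as f:
        denials, bpf_ops = summarize(f)
    print(bpf_ops)
    for (perm, cap_name), count in denials.most_common():
        print(f"{count:6d}  denied {{ {perm} }} ({cap_name})")
```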
Nov 1 00:44:00.520127 kubelet[2501]: I1101 00:44:00.520084 2501 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:44:00.520249 kubelet[2501]: I1101 00:44:00.520209 2501 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:44:00.520342 kubelet[2501]: I1101 00:44:00.520224 2501 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-3bc793b712","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:44:00.520342 kubelet[2501]: I1101 00:44:00.520323 2501 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:44:00.520342 kubelet[2501]: I1101 00:44:00.520330 2501 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:44:00.520438 kubelet[2501]: I1101 00:44:00.520355 2501 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:44:00.520457 kubelet[2501]: I1101 00:44:00.520447 2501 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:44:00.520475 kubelet[2501]: I1101 00:44:00.520458 2501 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:44:00.520475 kubelet[2501]: I1101 00:44:00.520469 2501 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:44:00.520475 kubelet[2501]: I1101 00:44:00.520474 2501 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:44:00.520925 kubelet[2501]: I1101 00:44:00.520915 2501 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:44:00.521243 kubelet[2501]: I1101 00:44:00.521236 2501 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:44:00.521560 kubelet[2501]: I1101 00:44:00.521553 2501 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:44:00.521595 kubelet[2501]: I1101 00:44:00.521574 2501 server.go:1287] "Started kubelet" Nov 1 00:44:00.521634 kubelet[2501]: I1101 00:44:00.521614 2501 
server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:44:00.521673 kubelet[2501]: I1101 00:44:00.521648 2501 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:44:00.521801 kubelet[2501]: I1101 00:44:00.521783 2501 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:44:00.520000 audit[2501]: AVC avc: denied { mac_admin } for pid=2501 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.520000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:44:00.520000 audit[2501]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d26fc0 a1=c000d28630 a2=c000d26f90 a3=25 items=0 ppid=1 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:00.520000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:44:00.520000 audit[2501]: AVC avc: denied { mac_admin } for pid=2501 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.520000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:44:00.520000 audit[2501]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d402c0 a1=c000d28648 a2=c000d27050 a3=25 items=0 ppid=1 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:00.520000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:44:00.522727 kubelet[2501]: I1101 00:44:00.522384 2501 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:44:00.522727 kubelet[2501]: I1101 00:44:00.522410 2501 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:44:00.522727 kubelet[2501]: I1101 00:44:00.522430 2501 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:44:00.522727 kubelet[2501]: I1101 00:44:00.522443 2501 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:44:00.522727 kubelet[2501]: I1101 00:44:00.522499 2501 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:44:00.522727 kubelet[2501]: E1101 00:44:00.522587 2501 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-3bc793b712\" not found" Nov 1 00:44:00.522947 kubelet[2501]: I1101 00:44:00.522929 2501 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:44:00.523169 kubelet[2501]: I1101 00:44:00.523156 2501 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:44:00.523246 kubelet[2501]: I1101 00:44:00.523234 2501 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:44:00.523292 kubelet[2501]: I1101 00:44:00.523251 2501 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:44:00.523348 kubelet[2501]: I1101 00:44:00.523338 2501 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:44:00.523620 kubelet[2501]: E1101 00:44:00.523607 2501 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:44:00.524302 kubelet[2501]: I1101 00:44:00.524292 2501 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:44:00.527939 kubelet[2501]: I1101 00:44:00.527920 2501 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:44:00.528430 kubelet[2501]: I1101 00:44:00.528417 2501 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:44:00.528474 kubelet[2501]: I1101 00:44:00.528437 2501 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:44:00.528474 kubelet[2501]: I1101 00:44:00.528452 2501 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
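The SYSCALL/PROCTITLE records around the kubelet start (syscall 188 is setxattr on x86_64, failing with exit=-22, i.e. EINVAL, because the loaded policy rejects the `container_file_t` context) carry a hex-encoded `proctitle=` value: auditd hex-encodes the field because the kernel stores the command line with NUL bytes between arguments. A short, illustrative sketch to decode such a value, using the first bytes of the (truncated) kubelet proctitle recorded above:

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded bytes with NUL-separated argv."""
    return " ".join(
        part.decode("utf-8", "replace")
        for part in bytes.fromhex(hex_value).split(b"\x00")
        if part
    )

# Leading bytes of the truncated kubelet proctitle from the record above.
print(decode_proctitle("2F7573722F62696E2F6B7562656C6574"))
# -> /usr/bin/kubelet
```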
Nov 1 00:44:00.528474 kubelet[2501]: I1101 00:44:00.528464 2501 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:44:00.528559 kubelet[2501]: E1101 00:44:00.528509 2501 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:44:00.539498 kubelet[2501]: I1101 00:44:00.539480 2501 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:44:00.539498 kubelet[2501]: I1101 00:44:00.539489 2501 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:44:00.539498 kubelet[2501]: I1101 00:44:00.539504 2501 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:44:00.539608 kubelet[2501]: I1101 00:44:00.539598 2501 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:44:00.539634 kubelet[2501]: I1101 00:44:00.539604 2501 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:44:00.539634 kubelet[2501]: I1101 00:44:00.539616 2501 policy_none.go:49] "None policy: Start" Nov 1 00:44:00.539634 kubelet[2501]: I1101 00:44:00.539621 2501 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:44:00.539634 kubelet[2501]: I1101 00:44:00.539627 2501 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:44:00.539706 kubelet[2501]: I1101 00:44:00.539683 2501 state_mem.go:75] "Updated machine memory state" Nov 1 00:44:00.541396 kubelet[2501]: I1101 00:44:00.541351 2501 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:44:00.541396 kubelet[2501]: I1101 00:44:00.541379 2501 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:44:00.539000 audit[2501]: AVC avc: denied { mac_admin } for pid=2501 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:00.539000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:44:00.539000 audit[2501]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0005a8ba0 a1=c0012d4f30 a2=c0005a8960 a3=25 items=0 ppid=1 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:00.539000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:44:00.541614 kubelet[2501]: I1101 00:44:00.541448 2501 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:44:00.541614 kubelet[2501]: I1101 00:44:00.541454 2501 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:44:00.541614 kubelet[2501]: I1101 00:44:00.541556 2501 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:44:00.541823 kubelet[2501]: E1101 00:44:00.541813 2501 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:44:00.629564 kubelet[2501]: I1101 00:44:00.629450 2501 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.629850 kubelet[2501]: I1101 00:44:00.629619 2501 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.629850 kubelet[2501]: I1101 00:44:00.629672 2501 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.638410 kubelet[2501]: W1101 00:44:00.638350 2501 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:44:00.638677 kubelet[2501]: W1101 00:44:00.638439 2501 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:44:00.638677 kubelet[2501]: E1101 00:44:00.638470 2501 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-3bc793b712\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.638677 kubelet[2501]: E1101 00:44:00.638598 2501 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-3bc793b712\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.639047 kubelet[2501]: W1101 00:44:00.638729 2501 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:44:00.639047 kubelet[2501]: E1101 00:44:00.638874 2501 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-3bc793b712\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.646422 kubelet[2501]: I1101 00:44:00.646392 2501 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.673532 kubelet[2501]: I1101 00:44:00.673516 2501 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.673621 kubelet[2501]: I1101 00:44:00.673568 2501 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.723518 kubelet[2501]: I1101 00:44:00.723399 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3619d1d26841327f0b5a8ab2e479624c-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-3bc793b712\" (UID: \"3619d1d26841327f0b5a8ab2e479624c\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.823990 kubelet[2501]: I1101 00:44:00.823877 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3619d1d26841327f0b5a8ab2e479624c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-3bc793b712\" (UID: \"3619d1d26841327f0b5a8ab2e479624c\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.823990 kubelet[2501]: I1101 00:44:00.823962 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/3619d1d26841327f0b5a8ab2e479624c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-3bc793b712\" (UID: \"3619d1d26841327f0b5a8ab2e479624c\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.824347 kubelet[2501]: I1101 00:44:00.824025 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8676602d0f69db7ae88cb859f416708d-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-3bc793b712\" (UID: \"8676602d0f69db7ae88cb859f416708d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.824347 kubelet[2501]: I1101 00:44:00.824070 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8676602d0f69db7ae88cb859f416708d-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-3bc793b712\" (UID: \"8676602d0f69db7ae88cb859f416708d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.824347 kubelet[2501]: I1101 00:44:00.824116 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8676602d0f69db7ae88cb859f416708d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-3bc793b712\" (UID: \"8676602d0f69db7ae88cb859f416708d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.824347 kubelet[2501]: I1101 00:44:00.824159 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e211b67e53f2c38825be43c94f461cba-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-3bc793b712\" (UID: \"e211b67e53f2c38825be43c94f461cba\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.824347 kubelet[2501]: I1101 00:44:00.824238 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8676602d0f69db7ae88cb859f416708d-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-3bc793b712\" (UID: \"8676602d0f69db7ae88cb859f416708d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:00.824839 kubelet[2501]: I1101 00:44:00.824441 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8676602d0f69db7ae88cb859f416708d-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-3bc793b712\" (UID: \"8676602d0f69db7ae88cb859f416708d\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:01.320000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="sdb9" ino=6272 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Nov 1 00:44:01.320000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000624b40 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:44:01.320000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:44:01.521880 kubelet[2501]: I1101 00:44:01.521773 2501 apiserver.go:52] "Watching apiserver" Nov 1 00:44:01.534074 kubelet[2501]: I1101 00:44:01.534019 2501 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:01.534344 kubelet[2501]: I1101 00:44:01.534236 2501 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:01.542275 kubelet[2501]: W1101 00:44:01.542227 2501 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:44:01.542543 kubelet[2501]: E1101 00:44:01.542328 2501 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-3bc793b712\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:01.542741 kubelet[2501]: W1101 00:44:01.542707 2501 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:44:01.542911 kubelet[2501]: E1101 00:44:01.542796 2501 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-3bc793b712\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" Nov 1 00:44:01.579179 kubelet[2501]: I1101 00:44:01.579011 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-3bc793b712" podStartSLOduration=3.578973415 podStartE2EDuration="3.578973415s" podCreationTimestamp="2025-11-01 00:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:44:01.57888448 +0000 UTC m=+1.103814528" watchObservedRunningTime="2025-11-01 00:44:01.578973415 +0000 UTC m=+1.103903457" Nov 1 00:44:01.587357 kubelet[2501]: I1101 00:44:01.587308 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-3bc793b712" podStartSLOduration=3.587289959 podStartE2EDuration="3.587289959s" podCreationTimestamp="2025-11-01 00:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:44:01.58726841 +0000 UTC m=+1.112198446" watchObservedRunningTime="2025-11-01 00:44:01.587289959 +0000 UTC m=+1.112220002" Nov 1 00:44:01.624268 kubelet[2501]: I1101 00:44:01.624214 2501 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:44:01.631089 kubelet[2501]: I1101 00:44:01.630987 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-3bc793b712" podStartSLOduration=3.630954676 podStartE2EDuration="3.630954676s" podCreationTimestamp="2025-11-01 00:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:44:01.614117033 +0000 UTC m=+1.139047112" watchObservedRunningTime="2025-11-01 00:44:01.630954676 +0000 UTC m=+1.155884740" Nov 1 
00:44:03.470000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:03.470000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000fe3aa0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:44:03.470000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:44:03.471000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:03.471000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0000b38a0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:44:03.471000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:44:03.472000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:03.472000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0000b38e0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:44:03.472000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:44:03.472000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:03.472000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0000b3920 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:44:03.472000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:44:05.934766 kubelet[2501]: I1101 00:44:05.934693 2501 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:44:05.935615 env[1565]: time="2025-11-01T00:44:05.935485821Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:44:05.936227 kubelet[2501]: I1101 00:44:05.936009 2501 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:44:06.964330 systemd[1]: Created slice kubepods-besteffort-poddc0fbb90_27ef_4ad0_a118_cff895b45ba9.slice. Nov 1 00:44:06.967884 kubelet[2501]: I1101 00:44:06.967844 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dc0fbb90-27ef-4ad0-a118-cff895b45ba9-kube-proxy\") pod \"kube-proxy-5nltd\" (UID: \"dc0fbb90-27ef-4ad0-a118-cff895b45ba9\") " pod="kube-system/kube-proxy-5nltd" Nov 1 00:44:06.968270 kubelet[2501]: I1101 00:44:06.967896 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc0fbb90-27ef-4ad0-a118-cff895b45ba9-xtables-lock\") pod \"kube-proxy-5nltd\" (UID: \"dc0fbb90-27ef-4ad0-a118-cff895b45ba9\") " pod="kube-system/kube-proxy-5nltd" Nov 1 00:44:06.968270 kubelet[2501]: I1101 00:44:06.967926 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjhfb\" (UniqueName: \"kubernetes.io/projected/dc0fbb90-27ef-4ad0-a118-cff895b45ba9-kube-api-access-kjhfb\") pod \"kube-proxy-5nltd\" (UID: \"dc0fbb90-27ef-4ad0-a118-cff895b45ba9\") " pod="kube-system/kube-proxy-5nltd" Nov 1 00:44:06.968270 kubelet[2501]: I1101 00:44:06.967956 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc0fbb90-27ef-4ad0-a118-cff895b45ba9-lib-modules\") pod \"kube-proxy-5nltd\" (UID: \"dc0fbb90-27ef-4ad0-a118-cff895b45ba9\") " pod="kube-system/kube-proxy-5nltd" Nov 1 00:44:07.083754 kubelet[2501]: I1101 00:44:07.083661 2501 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:44:07.111084 systemd[1]: Created slice kubepods-besteffort-pod9e1b7930_0fa1_49b9_833e_499e6a18c1c3.slice. 
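The audit PROCTITLE records above carry the audited process's full command line as a single hex string, with NUL bytes separating the arguments; the value repeated in the kube-controller-manager denials decodes to "kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authori" (truncated in the record itself). A minimal decoding sketch, with an illustrative helper name that is not part of any audit tool:

```python
# Decode an audit PROCTITLE value: the command line is hex-encoded,
# with NUL bytes separating argv entries. decode_proctitle is an
# illustrative name, not an existing utility.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

print(decode_proctitle(
    "6B7562652D636F6E74726F6C6C65722D6D616E61676572"          # kube-controller-manager
    "002D2D616C6C6F636174652D6E6F64652D63696472733D74727565"  # \0--allocate-node-cidrs=true
))
# -> kube-controller-manager --allocate-node-cidrs=true
```

Fed the full hex string from the records above, it reproduces the controller-manager flags behind the SELinux watch denial on /etc/kubernetes/pki/ca.crt.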
Nov 1 00:44:07.169455 kubelet[2501]: I1101 00:44:07.169323 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9e1b7930-0fa1-49b9-833e-499e6a18c1c3-var-lib-calico\") pod \"tigera-operator-7dcd859c48-5mb5f\" (UID: \"9e1b7930-0fa1-49b9-833e-499e6a18c1c3\") " pod="tigera-operator/tigera-operator-7dcd859c48-5mb5f" Nov 1 00:44:07.169455 kubelet[2501]: I1101 00:44:07.169445 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbgtq\" (UniqueName: \"kubernetes.io/projected/9e1b7930-0fa1-49b9-833e-499e6a18c1c3-kube-api-access-nbgtq\") pod \"tigera-operator-7dcd859c48-5mb5f\" (UID: \"9e1b7930-0fa1-49b9-833e-499e6a18c1c3\") " pod="tigera-operator/tigera-operator-7dcd859c48-5mb5f" Nov 1 00:44:07.278695 env[1565]: time="2025-11-01T00:44:07.278453736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5nltd,Uid:dc0fbb90-27ef-4ad0-a118-cff895b45ba9,Namespace:kube-system,Attempt:0,}" Nov 1 00:44:07.310283 env[1565]: time="2025-11-01T00:44:07.310140612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:07.310283 env[1565]: time="2025-11-01T00:44:07.310240642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:07.310708 env[1565]: time="2025-11-01T00:44:07.310283393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:07.310826 env[1565]: time="2025-11-01T00:44:07.310718544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdcfa8f04c2bf9dac5af261928bc0f5424988c081ae62bb6f82df49aca2ef507 pid=2591 runtime=io.containerd.runc.v2 Nov 1 00:44:07.337073 systemd[1]: Started cri-containerd-fdcfa8f04c2bf9dac5af261928bc0f5424988c081ae62bb6f82df49aca2ef507.scope. Nov 1 00:44:07.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.378006 kernel: kauditd_printk_skb: 263 callbacks suppressed Nov 1 00:44:07.378061 kernel: audit: type=1400 audit(1761957847.349:792): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.414717 env[1565]: time="2025-11-01T00:44:07.414692683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5mb5f,Uid:9e1b7930-0fa1-49b9-833e-499e6a18c1c3,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:44:07.421832 env[1565]: time="2025-11-01T00:44:07.421797674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:07.421832 env[1565]: time="2025-11-01T00:44:07.421824865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:07.421832 env[1565]: time="2025-11-01T00:44:07.421831880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:07.421925 env[1565]: time="2025-11-01T00:44:07.421893947Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b94901d0c705ce97bd6c5951e9d22d1566eb299e97e78f3aa47c38ebdb1d6031 pid=2625 runtime=io.containerd.runc.v2 Nov 1 00:44:07.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.441878 systemd[1]: Started cri-containerd-b94901d0c705ce97bd6c5951e9d22d1566eb299e97e78f3aa47c38ebdb1d6031.scope. Nov 1 00:44:07.503352 kernel: audit: type=1400 audit(1761957847.349:793): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.503402 kernel: audit: type=1400 audit(1761957847.349:794): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.566974 kernel: audit: type=1400 audit(1761957847.349:795): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.631021 kernel: audit: type=1400 audit(1761957847.349:796): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.695054 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Nov 1 00:44:07.695079 kernel: audit: type=1400 audit(1761957847.349:797): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.722127 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Nov 1 00:44:07.722159 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Nov 1 00:44:07.722172 kernel: audit: audit_lost=2 audit_rate_limit=0 audit_backlog_limit=64 Nov 1 00:44:07.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.349000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit: BPF prog-id=100 op=LOAD Nov 1 00:44:07.440000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit[2601]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2591 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664636661386630346332626639646163356166323631393238626330 Nov 1 00:44:07.440000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit[2601]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2591 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664636661386630346332626639646163356166323631393238626330 Nov 1 00:44:07.440000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.440000 audit: BPF prog-id=101 op=LOAD Nov 1 00:44:07.440000 audit[2601]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0001dd4c0 items=0 ppid=2591 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664636661386630346332626639646163356166323631393238626330 Nov 1 00:44:07.566000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.566000 audit[2601]: AVC avc: denied { bpf } for pid=2601 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.566000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.566000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.566000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.566000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.566000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.566000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit: BPF prog-id=102 op=LOAD Nov 1 00:44:07.630000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2625 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239343930316430633730356365393762643663353935316539643232 Nov 1 00:44:07.630000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2625 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239343930316430633730356365393762643663353935316539643232 Nov 1 00:44:07.630000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.566000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.630000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0003ec260 items=0 ppid=2625 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239343930316430633730356365393762643663353935316539643232 Nov 1 00:44:07.785000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.785000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.785000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.785000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.785000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.785000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.785000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.785000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.785000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.785000 audit: BPF prog-id=105 op=LOAD Nov 1 00:44:07.785000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0003ec2a8 items=0 ppid=2625 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239343930316430633730356365393762643663353935316539643232 Nov 1 00:44:07.840000 audit: BPF prog-id=105 op=UNLOAD Nov 1 00:44:07.840000 audit: BPF prog-id=104 op=UNLOAD Nov 1 00:44:07.840000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.840000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.840000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.840000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.840000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.840000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.840000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.840000 audit[2634]: AVC avc: denied { perfmon } for pid=2634 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.840000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.566000 audit[2601]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0001dd508 items=0 ppid=2591 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664636661386630346332626639646163356166323631393238626330 Nov 1 00:44:07.867000 audit: BPF prog-id=103 op=UNLOAD Nov 1 00:44:07.867000 audit: BPF prog-id=101 op=UNLOAD Nov 1 00:44:07.867000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.867000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.840000 audit[2634]: AVC avc: denied { bpf } for pid=2634 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.840000 audit: BPF prog-id=106 op=LOAD Nov 1 00:44:07.840000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0003ec6b8 items=0 ppid=2625 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239343930316430633730356365393762643663353935316539643232 Nov 1 00:44:07.867000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.867000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.867000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.867000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.867000 audit[2601]: AVC avc: denied { perfmon } for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.867000 audit[2601]: AVC avc: denied { perfmon } 
for pid=2601 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.867000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.867000 audit[2601]: AVC avc: denied { bpf } for pid=2601 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.867000 audit: BPF prog-id=107 op=LOAD Nov 1 00:44:07.867000 audit[2601]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0001dd918 items=0 ppid=2591 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664636661386630346332626639646163356166323631393238626330 Nov 1 00:44:07.872725 env[1565]: time="2025-11-01T00:44:07.872698585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5nltd,Uid:dc0fbb90-27ef-4ad0-a118-cff895b45ba9,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdcfa8f04c2bf9dac5af261928bc0f5424988c081ae62bb6f82df49aca2ef507\"" Nov 1 00:44:07.873846 env[1565]: time="2025-11-01T00:44:07.873828603Z" level=info msg="CreateContainer within sandbox \"fdcfa8f04c2bf9dac5af261928bc0f5424988c081ae62bb6f82df49aca2ef507\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:44:07.879289 env[1565]: time="2025-11-01T00:44:07.879270902Z" level=info msg="CreateContainer within sandbox \"fdcfa8f04c2bf9dac5af261928bc0f5424988c081ae62bb6f82df49aca2ef507\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc01665f7a4df869c71b18108c3697253f983da88cb97b2d474c939235430c1a\"" Nov 1 00:44:07.879515 env[1565]: time="2025-11-01T00:44:07.879499694Z" level=info msg="StartContainer for \"cc01665f7a4df869c71b18108c3697253f983da88cb97b2d474c939235430c1a\"" Nov 1 00:44:07.885002 env[1565]: time="2025-11-01T00:44:07.884975117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5mb5f,Uid:9e1b7930-0fa1-49b9-833e-499e6a18c1c3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b94901d0c705ce97bd6c5951e9d22d1566eb299e97e78f3aa47c38ebdb1d6031\"" Nov 1 00:44:07.885816 env[1565]: time="2025-11-01T00:44:07.885778371Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:44:07.887956 systemd[1]: Started cri-containerd-cc01665f7a4df869c71b18108c3697253f983da88cb97b2d474c939235430c1a.scope. 
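The containerd entries above record the full CRI flow for the kube-proxy pod: RunPodSandbox returns sandbox id fdcfa8f0…, CreateContainer within that sandbox returns container id cc01665f…, and StartContainer launches it, while the tigera-operator sandbox b94901d0… moves on to pulling quay.io/tigera/operator:v1.38.7. A minimal sketch for pulling the returned IDs out of journal lines in this escaped-quote msg format (the regex and function name are assumptions for illustration, not part of containerd):

```python
import re

# Containerd logs the returned id inside an escaped-quote msg field:
#   msg="RunPodSandbox ... returns sandbox id \"<64-hex>\""
ID_RE = re.compile(r'returns (sandbox|container) id \\"([0-9a-f]{64})\\"')

def extract_ids(journal_text: str) -> list[tuple[str, str]]:
    """Return (kind, id) pairs found in the given journal text."""
    return ID_RE.findall(journal_text)

sample = (
    r'env[1565]: time="..." level=info msg="RunPodSandbox for ... returns '
    r'sandbox id \"fdcfa8f04c2bf9dac5af261928bc0f5424988c081ae62bb6f82df49aca2ef507\""'
)
print(extract_ids(sample))
# -> [('sandbox', 'fdcfa8f04c2bf9dac5af261928bc0f5424988c081ae62bb6f82df49aca2ef507')]
```

The same pattern matches the CreateContainer line above that returns the cc01665f… container id.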
Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001476b0 a2=3c a3=7fec6be422b8 items=0 ppid=2591 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363303136363566376134646638363963373162313831303863333639 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit: BPF prog-id=108 op=LOAD Nov 1 00:44:07.894000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001479d8 a2=78 a3=c0002c3c88 items=0 ppid=2591 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.894000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363303136363566376134646638363963373162313831303863333639 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit: BPF prog-id=109 op=LOAD Nov 1 00:44:07.894000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000147770 a2=78 a3=c0002c3cd8 items=0 ppid=2591 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363303136363566376134646638363963373162313831303863333639 Nov 1 00:44:07.894000 audit: BPF prog-id=109 op=UNLOAD Nov 1 00:44:07.894000 audit: BPF prog-id=108 op=UNLOAD Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { perfmon } for pid=2672 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit[2672]: AVC avc: denied { bpf } for pid=2672 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:07.894000 audit: BPF prog-id=110 op=LOAD Nov 1 00:44:07.894000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000147c30 a2=78 a3=c0002c3d68 items=0 ppid=2591 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363303136363566376134646638363963373162313831303863333639 Nov 1 00:44:07.901779 env[1565]: time="2025-11-01T00:44:07.901752623Z" level=info msg="StartContainer for \"cc01665f7a4df869c71b18108c3697253f983da88cb97b2d474c939235430c1a\" returns successfully" Nov 1 00:44:07.970565 update_engine[1559]: I1101 00:44:07.970464 1559 update_attempter.cc:509] Updating boot flags... 
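The audit SYSCALL records in this section identify calls only by number: arch=c000003e with syscall=254 for the kube-controller-manager watch denials, syscall=321 for the runc BPF program loads, and syscall=46 in the iptables NETFILTER_CFG records that follow. arch c000003e is AUDIT_ARCH_X86_64, and on x86_64 those numbers are inotify_add_watch, bpf, and sendmsg respectively. A small lookup sketch covering just the numbers seen here (the map and function names are illustrative, and the table only holds for the x86_64 ABI):

```python
# Minimal x86_64 syscall-number map for the audit records in this log.
# arch=c000003e is AUDIT_ARCH_X86_64; other architectures number calls differently.
X86_64_SYSCALLS = {
    46: "sendmsg",             # netlink messages behind the iptables-nft changes below
    254: "inotify_add_watch",  # the denied { watch } on /etc/kubernetes/pki/ca.crt
    321: "bpf",                # the runc BPF prog-id LOAD/UNLOAD records above
}

def syscall_name(arch: str, nr: int) -> str:
    if arch.lower() != "c000003e":   # only the x86_64 ABI is mapped here
        return f"unmapped arch {arch}: syscall {nr}"
    return X86_64_SYSCALLS.get(nr, f"syscall {nr}")

print(syscall_name("c000003e", 254))  # -> inotify_add_watch
```

On hosts with the audit userspace tools installed, ausyscall provides the same mapping; the sketch is only for reading raw records like these without it.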
Nov 1 00:44:08.048000 audit[2761]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.048000 audit[2761]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6acf5450 a2=0 a3=7fff6acf543c items=0 ppid=2683 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:44:08.048000 audit[2762]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2762 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.048000 audit[2762]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1708dce0 a2=0 a3=7ffe1708dccc items=0 ppid=2683 pid=2762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.048000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:44:08.050000 audit[2764]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2764 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.050000 audit[2764]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc92d36770 a2=0 a3=7ffc92d3675c items=0 ppid=2683 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.050000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:44:08.050000 audit[2763]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2763 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.050000 audit[2763]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffae362b10 a2=0 a3=7fffae362afc items=0 ppid=2683 pid=2763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.050000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:44:08.052000 audit[2765]: NETFILTER_CFG table=filter:42 family=10 entries=1 op=nft_register_chain pid=2765 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.052000 audit[2765]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6536fdf0 a2=0 a3=7ffd6536fddc items=0 ppid=2683 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.052000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:44:08.052000 audit[2766]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2766 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 
00:44:08.052000 audit[2766]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde8de84f0 a2=0 a3=7ffde8de84dc items=0 ppid=2683 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.052000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:44:08.154000 audit[2768]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.154000 audit[2768]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe34261d50 a2=0 a3=7ffe34261d3c items=0 ppid=2683 pid=2768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.154000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:44:08.161000 audit[2770]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2770 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.161000 audit[2770]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdb1198490 a2=0 a3=7ffdb119847c items=0 ppid=2683 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Nov 1 00:44:08.172000 audit[2773]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.172000 audit[2773]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffde5e20880 a2=0 a3=7ffde5e2086c items=0 ppid=2683 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Nov 1 00:44:08.175000 audit[2774]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2774 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.175000 audit[2774]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb2490030 a2=0 a3=7ffdb249001c items=0 ppid=2683 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.175000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:44:08.182000 audit[2776]: NETFILTER_CFG 
table=filter:48 family=2 entries=1 op=nft_register_rule pid=2776 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.182000 audit[2776]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcec077bc0 a2=0 a3=7ffcec077bac items=0 ppid=2683 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.182000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:44:08.185000 audit[2777]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2777 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.185000 audit[2777]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc24466700 a2=0 a3=7ffc244666ec items=0 ppid=2683 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:44:08.192000 audit[2779]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2779 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.192000 audit[2779]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc8c6e3df0 a2=0 a3=7ffc8c6e3ddc items=0 ppid=2683 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:44:08.203000 audit[2782]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2782 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.203000 audit[2782]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff77095590 a2=0 a3=7fff7709557c items=0 ppid=2683 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.203000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Nov 1 00:44:08.205000 audit[2783]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.205000 audit[2783]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdcbac9a0 a2=0 a3=7ffcdcbac98c items=0 ppid=2683 pid=2783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.205000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:44:08.212000 audit[2785]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.212000 audit[2785]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe8d554a60 a2=0 a3=7ffe8d554a4c items=0 ppid=2683 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.212000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:44:08.214000 audit[2786]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.214000 audit[2786]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2bd6c730 a2=0 a3=7fff2bd6c71c items=0 ppid=2683 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.214000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:44:08.221000 audit[2788]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.221000 audit[2788]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe9d6fbc90 a2=0 a3=7ffe9d6fbc7c items=0 ppid=2683 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.221000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:44:08.231000 audit[2791]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2791 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.231000 audit[2791]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd2195d2d0 a2=0 a3=7ffd2195d2bc items=0 ppid=2683 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.231000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:44:08.241000 audit[2794]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2794 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.241000 audit[2794]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7fffe1f445f0 a2=0 a3=7fffe1f445dc items=0 ppid=2683 pid=2794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.241000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:44:08.243000 audit[2795]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.243000 audit[2795]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc3d44a950 a2=0 a3=7ffc3d44a93c items=0 ppid=2683 pid=2795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.243000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:44:08.249000 audit[2797]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.249000 audit[2797]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc017926e0 a2=0 a3=7ffc017926cc items=0 ppid=2683 pid=2797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.249000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:44:08.258000 audit[2800]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.258000 audit[2800]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffff9bbe5b0 a2=0 a3=7ffff9bbe59c items=0 ppid=2683 pid=2800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.258000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:44:08.261000 audit[2801]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.261000 audit[2801]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee99cd8b0 a2=0 a3=7ffee99cd89c items=0 ppid=2683 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:44:08.267000 audit[2803]: NETFILTER_CFG table=nat:62 family=2 entries=1 
op=nft_register_rule pid=2803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:44:08.267000 audit[2803]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd9661d880 a2=0 a3=7ffd9661d86c items=0 ppid=2683 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:44:08.332000 audit[2809]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:08.332000 audit[2809]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd87df3b00 a2=0 a3=7ffd87df3aec items=0 ppid=2683 pid=2809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.332000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:08.367000 audit[2809]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:08.367000 audit[2809]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd87df3b00 a2=0 a3=7ffd87df3aec items=0 ppid=2683 pid=2809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:08.370000 audit[2814]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2814 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.370000 audit[2814]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffc2326f30 a2=0 a3=7fffc2326f1c items=0 ppid=2683 pid=2814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.370000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:44:08.376000 audit[2816]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2816 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.376000 audit[2816]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe482d7970 a2=0 a3=7ffe482d795c items=0 ppid=2683 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.376000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Nov 1 00:44:08.386000 audit[2819]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2819 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.386000 audit[2819]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd8358e420 a2=0 a3=7ffd8358e40c items=0 ppid=2683 pid=2819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.386000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Nov 1 00:44:08.389000 audit[2820]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2820 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.389000 audit[2820]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa205e560 a2=0 a3=7fffa205e54c items=0 ppid=2683 pid=2820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.389000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:44:08.395000 audit[2822]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2822 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.395000 audit[2822]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff6f2cdbc0 a2=0 a3=7fff6f2cdbac items=0 ppid=2683 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.395000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:44:08.398000 audit[2823]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2823 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.398000 audit[2823]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc706013b0 a2=0 a3=7ffc7060139c items=0 ppid=2683 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.398000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:44:08.404000 audit[2825]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2825 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.404000 audit[2825]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc550d1290 a2=0 a3=7ffc550d127c 
items=0 ppid=2683 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.404000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Nov 1 00:44:08.413000 audit[2828]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2828 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.413000 audit[2828]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff7d552cc0 a2=0 a3=7fff7d552cac items=0 ppid=2683 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.413000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:44:08.416000 audit[2829]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2829 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.416000 audit[2829]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff7d636a0 a2=0 a3=7ffff7d6368c items=0 ppid=2683 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.416000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:44:08.423000 audit[2831]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2831 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.423000 audit[2831]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc4b4c9850 a2=0 a3=7ffc4b4c983c items=0 ppid=2683 pid=2831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.423000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:44:08.426000 audit[2832]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2832 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.426000 audit[2832]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4fb90d50 a2=0 a3=7ffe4fb90d3c items=0 ppid=2683 pid=2832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.426000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:44:08.432000 audit[2834]: NETFILTER_CFG 
table=filter:76 family=10 entries=1 op=nft_register_rule pid=2834 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.432000 audit[2834]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce31f63b0 a2=0 a3=7ffce31f639c items=0 ppid=2683 pid=2834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.432000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:44:08.442000 audit[2837]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.442000 audit[2837]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe80cd4c10 a2=0 a3=7ffe80cd4bfc items=0 ppid=2683 pid=2837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.442000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:44:08.451000 audit[2840]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2840 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.451000 audit[2840]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe26c2fac0 a2=0 a3=7ffe26c2faac items=0 ppid=2683 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.451000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Nov 1 00:44:08.454000 audit[2841]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2841 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.454000 audit[2841]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffec310a880 a2=0 a3=7ffec310a86c items=0 ppid=2683 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.454000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:44:08.460000 audit[2843]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2843 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.460000 audit[2843]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc77438cb0 a2=0 a3=7ffc77438c9c items=0 ppid=2683 pid=2843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:44:08.469000 audit[2846]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.469000 audit[2846]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc5acc40e0 a2=0 a3=7ffc5acc40cc items=0 ppid=2683 pid=2846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.469000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:44:08.471000 audit[2847]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2847 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.471000 audit[2847]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff13142400 a2=0 a3=7fff131423ec items=0 ppid=2683 pid=2847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.471000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:44:08.477000 audit[2849]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2849 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.477000 audit[2849]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcebbfd3a0 a2=0 a3=7ffcebbfd38c items=0 ppid=2683 pid=2849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.477000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:44:08.480000 audit[2850]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2850 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.480000 audit[2850]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe77ddcf60 a2=0 a3=7ffe77ddcf4c items=0 ppid=2683 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:44:08.486000 audit[2852]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2852 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.486000 audit[2852]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc9d3a6c20 a2=0 
a3=7ffc9d3a6c0c items=0 ppid=2683 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.486000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:44:08.496000 audit[2855]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2855 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:44:08.496000 audit[2855]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff5f0399c0 a2=0 a3=7fff5f0399ac items=0 ppid=2683 pid=2855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.496000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:44:08.504000 audit[2857]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2857 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:44:08.504000 audit[2857]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc95e55e10 a2=0 a3=7ffc95e55dfc items=0 ppid=2683 pid=2857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.504000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:08.504000 audit[2857]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2857 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:44:08.504000 audit[2857]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc95e55e10 a2=0 a3=7ffc95e55dfc items=0 ppid=2683 pid=2857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.504000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:08.574044 kubelet[2501]: I1101 00:44:08.573892 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5nltd" podStartSLOduration=2.57383908 podStartE2EDuration="2.57383908s" podCreationTimestamp="2025-11-01 00:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:44:08.573591203 +0000 UTC m=+8.098521318" watchObservedRunningTime="2025-11-01 00:44:08.57383908 +0000 UTC m=+8.098769159" Nov 1 00:44:09.075250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394923352.mount: Deactivated successfully. 
Nov 1 00:44:09.602152 env[1565]: time="2025-11-01T00:44:09.602101905Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:09.602753 env[1565]: time="2025-11-01T00:44:09.602691700Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:09.603484 env[1565]: time="2025-11-01T00:44:09.603443728Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:09.604169 env[1565]: time="2025-11-01T00:44:09.604130866Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:09.604466 env[1565]: time="2025-11-01T00:44:09.604429172Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:44:09.605828 env[1565]: time="2025-11-01T00:44:09.605796587Z" level=info msg="CreateContainer within sandbox \"b94901d0c705ce97bd6c5951e9d22d1566eb299e97e78f3aa47c38ebdb1d6031\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:44:09.610218 env[1565]: time="2025-11-01T00:44:09.610200226Z" level=info msg="CreateContainer within sandbox \"b94901d0c705ce97bd6c5951e9d22d1566eb299e97e78f3aa47c38ebdb1d6031\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bcb2cc1aa34607143509640c6bd6935ef54dba921dd3cd1fe40a1edf021ee386\"" Nov 1 00:44:09.610551 env[1565]: time="2025-11-01T00:44:09.610538058Z" level=info msg="StartContainer for \"bcb2cc1aa34607143509640c6bd6935ef54dba921dd3cd1fe40a1edf021ee386\"" Nov 1 00:44:09.611595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount643997234.mount: Deactivated successfully. Nov 1 00:44:09.635995 systemd[1]: Started cri-containerd-bcb2cc1aa34607143509640c6bd6935ef54dba921dd3cd1fe40a1edf021ee386.scope. 
Nov 1 00:44:09.639000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit: BPF prog-id=111 op=LOAD Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=2625 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263623263633161613334363037313433353039363430633662643639 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001476b0 a2=3c a3=8 items=0 ppid=2625 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.639000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263623263633161613334363037313433353039363430633662643639 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit: BPF prog-id=112 op=LOAD Nov 1 00:44:09.639000 audit[2865]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c000414050 items=0 ppid=2625 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263623263633161613334363037313433353039363430633662643639 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: 
denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit: BPF prog-id=113 op=LOAD Nov 1 00:44:09.639000 audit[2865]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000147770 a2=78 a3=c000414098 items=0 ppid=2625 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263623263633161613334363037313433353039363430633662643639 Nov 1 00:44:09.639000 audit: BPF prog-id=113 op=UNLOAD Nov 1 00:44:09.639000 audit: BPF prog-id=112 op=UNLOAD Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC 
avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { perfmon } for pid=2865 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit[2865]: AVC avc: denied { bpf } for pid=2865 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.639000 audit: BPF prog-id=114 op=LOAD Nov 1 00:44:09.639000 audit[2865]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c0004144a8 items=0 ppid=2625 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263623263633161613334363037313433353039363430633662643639 Nov 1 00:44:09.647230 env[1565]: time="2025-11-01T00:44:09.647203190Z" level=info msg="StartContainer for \"bcb2cc1aa34607143509640c6bd6935ef54dba921dd3cd1fe40a1edf021ee386\" returns successfully" Nov 1 00:44:10.577912 kubelet[2501]: I1101 00:44:10.577850 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-5mb5f" podStartSLOduration=1.858321543 podStartE2EDuration="3.577840602s" podCreationTimestamp="2025-11-01 00:44:07 +0000 UTC" firstStartedPulling="2025-11-01 00:44:07.885531205 +0000 UTC m=+7.410461223" lastFinishedPulling="2025-11-01 00:44:09.605050268 +0000 UTC m=+9.129980282" observedRunningTime="2025-11-01 00:44:10.577681866 +0000 UTC m=+10.102611884" watchObservedRunningTime="2025-11-01 00:44:10.577840602 +0000 UTC m=+10.102770617" Nov 1 00:44:14.174376 sudo[1777]: pam_unix(sudo:session): session closed for user root Nov 1 00:44:14.172000 audit[1777]: USER_END pid=1777 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:44:14.175690 sshd[1774]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:14.177842 systemd[1]: sshd@8-145.40.82.49:22-147.75.109.163:59282.service: Deactivated successfully. Nov 1 00:44:14.178652 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:44:14.178749 systemd[1]: session-11.scope: Consumed 3.963s CPU time. Nov 1 00:44:14.179490 systemd-logind[1557]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:44:14.180199 systemd-logind[1557]: Removed session 11. Nov 1 00:44:14.200844 kernel: kauditd_printk_skb: 361 callbacks suppressed Nov 1 00:44:14.200918 kernel: audit: type=1106 audit(1761957854.172:903): pid=1777 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? 
addr=? terminal=? res=success' Nov 1 00:44:14.173000 audit[1777]: CRED_DISP pid=1777 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:44:14.372274 kernel: audit: type=1104 audit(1761957854.173:904): pid=1777 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:44:14.372354 kernel: audit: type=1106 audit(1761957854.174:905): pid=1774 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:14.174000 audit[1774]: USER_END pid=1774 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:14.175000 audit[1774]: CRED_DISP pid=1774 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:14.553064 kernel: audit: type=1104 audit(1761957854.175:906): pid=1774 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:14.553154 kernel: audit: type=1131 audit(1761957854.176:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-145.40.82.49:22-147.75.109.163:59282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:14.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-145.40.82.49:22-147.75.109.163:59282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:14.544000 audit[3021]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:14.697831 kernel: audit: type=1325 audit(1761957854.544:908): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:14.544000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff32143a40 a2=0 a3=7fff32143a2c items=0 ppid=2683 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:14.793552 kernel: audit: type=1300 audit(1761957854.544:908): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff32143a40 a2=0 a3=7fff32143a2c items=0 ppid=2683 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:14.544000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:14.850545 kernel: audit: type=1327 audit(1761957854.544:908): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:14.855000 audit[3021]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:14.914517 kernel: audit: type=1325 audit(1761957854.855:909): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:14.914594 kernel: audit: type=1300 audit(1761957854.855:909): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff32143a40 a2=0 a3=0 items=0 ppid=2683 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:14.855000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff32143a40 a2=0 a3=0 items=0 ppid=2683 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:14.855000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:15.011000 audit[3023]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:15.011000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff7c347c90 a2=0 a3=7fff7c347c7c items=0 ppid=2683 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:15.011000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:15.028000 audit[3023]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3023 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:15.028000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7c347c90 a2=0 a3=0 items=0 ppid=2683 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:15.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:16.158000 audit[3025]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:16.158000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd5e36b140 a2=0 a3=7ffd5e36b12c items=0 ppid=2683 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:16.158000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:16.168000 audit[3025]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:16.168000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd5e36b140 a2=0 a3=0 items=0 ppid=2683 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:16.168000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:17.182000 audit[3027]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:17.182000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd1a8ccf80 a2=0 a3=7ffd1a8ccf6c items=0 ppid=2683 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:17.182000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:17.198000 audit[3027]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:17.198000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd1a8ccf80 a2=0 a3=0 items=0 ppid=2683 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:17.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:18.166000 audit[3029]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:18.166000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 
a1=7fff95d55490 a2=0 a3=7fff95d5547c items=0 ppid=2683 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:18.166000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:18.180000 audit[3029]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:18.180000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff95d55490 a2=0 a3=0 items=0 ppid=2683 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:18.180000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:18.204279 systemd[1]: Created slice kubepods-besteffort-podbd444261_6ab3_4a08_babc_927b8a66329c.slice. Nov 1 00:44:18.247455 kubelet[2501]: I1101 00:44:18.247335 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmkg4\" (UniqueName: \"kubernetes.io/projected/bd444261-6ab3-4a08-babc-927b8a66329c-kube-api-access-qmkg4\") pod \"calico-typha-6f944f9fdf-nr2k9\" (UID: \"bd444261-6ab3-4a08-babc-927b8a66329c\") " pod="calico-system/calico-typha-6f944f9fdf-nr2k9" Nov 1 00:44:18.248296 kubelet[2501]: I1101 00:44:18.247462 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bd444261-6ab3-4a08-babc-927b8a66329c-typha-certs\") pod \"calico-typha-6f944f9fdf-nr2k9\" (UID: \"bd444261-6ab3-4a08-babc-927b8a66329c\") " pod="calico-system/calico-typha-6f944f9fdf-nr2k9" Nov 1 00:44:18.248296 kubelet[2501]: I1101 00:44:18.247559 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd444261-6ab3-4a08-babc-927b8a66329c-tigera-ca-bundle\") pod \"calico-typha-6f944f9fdf-nr2k9\" (UID: \"bd444261-6ab3-4a08-babc-927b8a66329c\") " pod="calico-system/calico-typha-6f944f9fdf-nr2k9" Nov 1 00:44:18.434934 systemd[1]: Created slice kubepods-besteffort-podb8ff4200_8557_4abf_a40e_51e49af08b77.slice. 
Nov 1 00:44:18.449276 kubelet[2501]: I1101 00:44:18.449230 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b8ff4200-8557-4abf-a40e-51e49af08b77-policysync\") pod \"calico-node-k97x5\" (UID: \"b8ff4200-8557-4abf-a40e-51e49af08b77\") " pod="calico-system/calico-node-k97x5" Nov 1 00:44:18.449466 kubelet[2501]: I1101 00:44:18.449292 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8ff4200-8557-4abf-a40e-51e49af08b77-tigera-ca-bundle\") pod \"calico-node-k97x5\" (UID: \"b8ff4200-8557-4abf-a40e-51e49af08b77\") " pod="calico-system/calico-node-k97x5" Nov 1 00:44:18.449466 kubelet[2501]: I1101 00:44:18.449336 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b8ff4200-8557-4abf-a40e-51e49af08b77-var-lib-calico\") pod \"calico-node-k97x5\" (UID: \"b8ff4200-8557-4abf-a40e-51e49af08b77\") " pod="calico-system/calico-node-k97x5" Nov 1 00:44:18.449466 kubelet[2501]: I1101 00:44:18.449364 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8ff4200-8557-4abf-a40e-51e49af08b77-xtables-lock\") pod \"calico-node-k97x5\" (UID: \"b8ff4200-8557-4abf-a40e-51e49af08b77\") " pod="calico-system/calico-node-k97x5" Nov 1 00:44:18.449466 kubelet[2501]: I1101 00:44:18.449407 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b8ff4200-8557-4abf-a40e-51e49af08b77-cni-bin-dir\") pod \"calico-node-k97x5\" (UID: \"b8ff4200-8557-4abf-a40e-51e49af08b77\") " pod="calico-system/calico-node-k97x5" Nov 1 00:44:18.449466 kubelet[2501]: I1101 00:44:18.449436 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b8ff4200-8557-4abf-a40e-51e49af08b77-cni-net-dir\") pod \"calico-node-k97x5\" (UID: \"b8ff4200-8557-4abf-a40e-51e49af08b77\") " pod="calico-system/calico-node-k97x5" Nov 1 00:44:18.449802 kubelet[2501]: I1101 00:44:18.449467 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b8ff4200-8557-4abf-a40e-51e49af08b77-cni-log-dir\") pod \"calico-node-k97x5\" (UID: \"b8ff4200-8557-4abf-a40e-51e49af08b77\") " pod="calico-system/calico-node-k97x5" Nov 1 00:44:18.449802 kubelet[2501]: I1101 00:44:18.449548 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b8ff4200-8557-4abf-a40e-51e49af08b77-flexvol-driver-host\") pod \"calico-node-k97x5\" (UID: \"b8ff4200-8557-4abf-a40e-51e49af08b77\") " pod="calico-system/calico-node-k97x5" Nov 1 00:44:18.449802 kubelet[2501]: I1101 00:44:18.449595 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8ff4200-8557-4abf-a40e-51e49af08b77-lib-modules\") pod \"calico-node-k97x5\" (UID: \"b8ff4200-8557-4abf-a40e-51e49af08b77\") " pod="calico-system/calico-node-k97x5" Nov 1 00:44:18.449802 kubelet[2501]: I1101 00:44:18.449633 2501 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b8ff4200-8557-4abf-a40e-51e49af08b77-var-run-calico\") pod \"calico-node-k97x5\" (UID: \"b8ff4200-8557-4abf-a40e-51e49af08b77\") " pod="calico-system/calico-node-k97x5" Nov 1 00:44:18.449802 kubelet[2501]: I1101 00:44:18.449715 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b8ff4200-8557-4abf-a40e-51e49af08b77-node-certs\") pod \"calico-node-k97x5\" (UID: \"b8ff4200-8557-4abf-a40e-51e49af08b77\") " pod="calico-system/calico-node-k97x5" Nov 1 00:44:18.450086 kubelet[2501]: I1101 00:44:18.449766 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2glhk\" (UniqueName: \"kubernetes.io/projected/b8ff4200-8557-4abf-a40e-51e49af08b77-kube-api-access-2glhk\") pod \"calico-node-k97x5\" (UID: \"b8ff4200-8557-4abf-a40e-51e49af08b77\") " pod="calico-system/calico-node-k97x5" Nov 1 00:44:18.508871 env[1565]: time="2025-11-01T00:44:18.508766499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f944f9fdf-nr2k9,Uid:bd444261-6ab3-4a08-babc-927b8a66329c,Namespace:calico-system,Attempt:0,}" Nov 1 00:44:18.532433 env[1565]: time="2025-11-01T00:44:18.532274978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:18.532433 env[1565]: time="2025-11-01T00:44:18.532376412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:18.532929 env[1565]: time="2025-11-01T00:44:18.532448539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:18.533113 env[1565]: time="2025-11-01T00:44:18.532978439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/824bf44287674249db76af65924b797bd2db7f4672b87202d9c7daef37b1bfc8 pid=3039 runtime=io.containerd.runc.v2 Nov 1 00:44:18.554971 kubelet[2501]: E1101 00:44:18.554850 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.554971 kubelet[2501]: W1101 00:44:18.554894 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.554971 kubelet[2501]: E1101 00:44:18.554961 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.558670 kubelet[2501]: E1101 00:44:18.558609 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.558670 kubelet[2501]: W1101 00:44:18.558648 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.559171 kubelet[2501]: E1101 00:44:18.558683 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:18.564784 systemd[1]: Started cri-containerd-824bf44287674249db76af65924b797bd2db7f4672b87202d9c7daef37b1bfc8.scope. Nov 1 00:44:18.570359 kubelet[2501]: E1101 00:44:18.570327 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.570359 kubelet[2501]: W1101 00:44:18.570352 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.570631 kubelet[2501]: E1101 00:44:18.570380 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.580000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.580000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.580000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.580000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.580000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.580000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.580000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.580000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.580000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.581000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.581000 audit: BPF prog-id=115 op=LOAD Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=3039 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:18.582000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832346266343432383736373432343964623736616636353932346237 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=3039 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:18.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832346266343432383736373432343964623736616636353932346237 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit: BPF prog-id=116 op=LOAD Nov 1 00:44:18.582000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c000221cc0 items=0 ppid=3039 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:18.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832346266343432383736373432343964623736616636353932346237 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit: BPF prog-id=117 op=LOAD Nov 1 00:44:18.582000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c000221d08 items=0 ppid=3039 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:18.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832346266343432383736373432343964623736616636353932346237 Nov 1 00:44:18.582000 audit: BPF prog-id=117 op=UNLOAD Nov 1 00:44:18.582000 audit: BPF prog-id=116 op=UNLOAD Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: 
AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { perfmon } for pid=3049 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit[3049]: AVC avc: denied { bpf } for pid=3049 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.582000 audit: BPF prog-id=118 op=LOAD Nov 1 00:44:18.582000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c00041e118 items=0 ppid=3039 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:18.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832346266343432383736373432343964623736616636353932346237 Nov 1 00:44:18.608801 kubelet[2501]: E1101 00:44:18.608761 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:44:18.626103 env[1565]: time="2025-11-01T00:44:18.626075038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f944f9fdf-nr2k9,Uid:bd444261-6ab3-4a08-babc-927b8a66329c,Namespace:calico-system,Attempt:0,} returns sandbox id \"824bf44287674249db76af65924b797bd2db7f4672b87202d9c7daef37b1bfc8\"" Nov 1 00:44:18.626807 env[1565]: time="2025-11-01T00:44:18.626791084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:44:18.637608 kubelet[2501]: E1101 00:44:18.637597 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.637608 kubelet[2501]: W1101 00:44:18.637607 2501 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.637683 kubelet[2501]: E1101 00:44:18.637618 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.637777 kubelet[2501]: E1101 00:44:18.637771 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.637798 kubelet[2501]: W1101 00:44:18.637777 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.637798 kubelet[2501]: E1101 00:44:18.637784 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.637938 kubelet[2501]: E1101 00:44:18.637932 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.637962 kubelet[2501]: W1101 00:44:18.637939 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.637962 kubelet[2501]: E1101 00:44:18.637946 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.638099 kubelet[2501]: E1101 00:44:18.638091 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.638099 kubelet[2501]: W1101 00:44:18.638098 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.638145 kubelet[2501]: E1101 00:44:18.638104 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.638229 kubelet[2501]: E1101 00:44:18.638224 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.638251 kubelet[2501]: W1101 00:44:18.638229 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.638251 kubelet[2501]: E1101 00:44:18.638234 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:18.638307 kubelet[2501]: E1101 00:44:18.638302 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.638328 kubelet[2501]: W1101 00:44:18.638307 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.638328 kubelet[2501]: E1101 00:44:18.638312 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.638381 kubelet[2501]: E1101 00:44:18.638377 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.638403 kubelet[2501]: W1101 00:44:18.638381 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.638403 kubelet[2501]: E1101 00:44:18.638386 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.638455 kubelet[2501]: E1101 00:44:18.638450 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.638475 kubelet[2501]: W1101 00:44:18.638455 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.638475 kubelet[2501]: E1101 00:44:18.638459 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.638534 kubelet[2501]: E1101 00:44:18.638529 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.638534 kubelet[2501]: W1101 00:44:18.638534 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.638578 kubelet[2501]: E1101 00:44:18.638538 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.638650 kubelet[2501]: E1101 00:44:18.638646 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.638672 kubelet[2501]: W1101 00:44:18.638650 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.638672 kubelet[2501]: E1101 00:44:18.638654 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:18.638720 kubelet[2501]: E1101 00:44:18.638716 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.638740 kubelet[2501]: W1101 00:44:18.638720 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.638740 kubelet[2501]: E1101 00:44:18.638725 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.638792 kubelet[2501]: E1101 00:44:18.638788 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.638813 kubelet[2501]: W1101 00:44:18.638792 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.638813 kubelet[2501]: E1101 00:44:18.638796 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.638868 kubelet[2501]: E1101 00:44:18.638863 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.638887 kubelet[2501]: W1101 00:44:18.638868 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.638887 kubelet[2501]: E1101 00:44:18.638872 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.638950 kubelet[2501]: E1101 00:44:18.638946 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.638972 kubelet[2501]: W1101 00:44:18.638951 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.638972 kubelet[2501]: E1101 00:44:18.638955 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.639021 kubelet[2501]: E1101 00:44:18.639017 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.639044 kubelet[2501]: W1101 00:44:18.639021 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.639044 kubelet[2501]: E1101 00:44:18.639026 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:18.639092 kubelet[2501]: E1101 00:44:18.639088 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.639115 kubelet[2501]: W1101 00:44:18.639092 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.639115 kubelet[2501]: E1101 00:44:18.639097 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.639167 kubelet[2501]: E1101 00:44:18.639163 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.639187 kubelet[2501]: W1101 00:44:18.639167 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.639187 kubelet[2501]: E1101 00:44:18.639172 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.639236 kubelet[2501]: E1101 00:44:18.639232 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.639256 kubelet[2501]: W1101 00:44:18.639236 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.639256 kubelet[2501]: E1101 00:44:18.639240 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.639306 kubelet[2501]: E1101 00:44:18.639301 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.639329 kubelet[2501]: W1101 00:44:18.639306 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.639329 kubelet[2501]: E1101 00:44:18.639310 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.639376 kubelet[2501]: E1101 00:44:18.639372 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.639398 kubelet[2501]: W1101 00:44:18.639376 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.639398 kubelet[2501]: E1101 00:44:18.639380 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:18.652213 kubelet[2501]: E1101 00:44:18.652134 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.652213 kubelet[2501]: W1101 00:44:18.652166 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.652213 kubelet[2501]: E1101 00:44:18.652196 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.652666 kubelet[2501]: I1101 00:44:18.652259 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ce15fc81-d33c-45b3-b08a-5d312fb076f0-socket-dir\") pod \"csi-node-driver-qpkjg\" (UID: \"ce15fc81-d33c-45b3-b08a-5d312fb076f0\") " pod="calico-system/csi-node-driver-qpkjg" Nov 1 00:44:18.652790 kubelet[2501]: E1101 00:44:18.652693 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.652790 kubelet[2501]: W1101 00:44:18.652722 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.652790 kubelet[2501]: E1101 00:44:18.652755 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.653101 kubelet[2501]: I1101 00:44:18.652797 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ce15fc81-d33c-45b3-b08a-5d312fb076f0-varrun\") pod \"csi-node-driver-qpkjg\" (UID: \"ce15fc81-d33c-45b3-b08a-5d312fb076f0\") " pod="calico-system/csi-node-driver-qpkjg" Nov 1 00:44:18.653437 kubelet[2501]: E1101 00:44:18.653365 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.653437 kubelet[2501]: W1101 00:44:18.653398 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.653749 kubelet[2501]: E1101 00:44:18.653442 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.653997 kubelet[2501]: E1101 00:44:18.653906 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.653997 kubelet[2501]: W1101 00:44:18.653933 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.653997 kubelet[2501]: E1101 00:44:18.653965 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:18.654423 kubelet[2501]: E1101 00:44:18.654396 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.654423 kubelet[2501]: W1101 00:44:18.654423 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.654682 kubelet[2501]: E1101 00:44:18.654454 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.654682 kubelet[2501]: I1101 00:44:18.654539 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4wc8\" (UniqueName: \"kubernetes.io/projected/ce15fc81-d33c-45b3-b08a-5d312fb076f0-kube-api-access-l4wc8\") pod \"csi-node-driver-qpkjg\" (UID: \"ce15fc81-d33c-45b3-b08a-5d312fb076f0\") " pod="calico-system/csi-node-driver-qpkjg" Nov 1 00:44:18.655164 kubelet[2501]: E1101 00:44:18.655089 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.655164 kubelet[2501]: W1101 00:44:18.655126 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.655164 kubelet[2501]: E1101 00:44:18.655172 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.655603 kubelet[2501]: E1101 00:44:18.655573 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.655603 kubelet[2501]: W1101 00:44:18.655598 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.655817 kubelet[2501]: E1101 00:44:18.655628 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.656176 kubelet[2501]: E1101 00:44:18.656103 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.656176 kubelet[2501]: W1101 00:44:18.656137 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.656176 kubelet[2501]: E1101 00:44:18.656177 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:18.656597 kubelet[2501]: I1101 00:44:18.656228 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce15fc81-d33c-45b3-b08a-5d312fb076f0-kubelet-dir\") pod \"csi-node-driver-qpkjg\" (UID: \"ce15fc81-d33c-45b3-b08a-5d312fb076f0\") " pod="calico-system/csi-node-driver-qpkjg" Nov 1 00:44:18.656844 kubelet[2501]: E1101 00:44:18.656754 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.656844 kubelet[2501]: W1101 00:44:18.656791 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.656844 kubelet[2501]: E1101 00:44:18.656829 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.657321 kubelet[2501]: E1101 00:44:18.657222 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.657321 kubelet[2501]: W1101 00:44:18.657247 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.657321 kubelet[2501]: E1101 00:44:18.657279 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.657802 kubelet[2501]: E1101 00:44:18.657754 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.657802 kubelet[2501]: W1101 00:44:18.657779 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.658027 kubelet[2501]: E1101 00:44:18.657811 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.658027 kubelet[2501]: I1101 00:44:18.657860 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ce15fc81-d33c-45b3-b08a-5d312fb076f0-registration-dir\") pod \"csi-node-driver-qpkjg\" (UID: \"ce15fc81-d33c-45b3-b08a-5d312fb076f0\") " pod="calico-system/csi-node-driver-qpkjg" Nov 1 00:44:18.658461 kubelet[2501]: E1101 00:44:18.658398 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.658461 kubelet[2501]: W1101 00:44:18.658443 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.658756 kubelet[2501]: E1101 00:44:18.658492 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:18.659019 kubelet[2501]: E1101 00:44:18.658973 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.659019 kubelet[2501]: W1101 00:44:18.658998 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.659228 kubelet[2501]: E1101 00:44:18.659031 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.659601 kubelet[2501]: E1101 00:44:18.659550 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.659601 kubelet[2501]: W1101 00:44:18.659577 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.659833 kubelet[2501]: E1101 00:44:18.659603 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.660112 kubelet[2501]: E1101 00:44:18.660052 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.660112 kubelet[2501]: W1101 00:44:18.660077 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.660112 kubelet[2501]: E1101 00:44:18.660101 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.739940 env[1565]: time="2025-11-01T00:44:18.739748957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k97x5,Uid:b8ff4200-8557-4abf-a40e-51e49af08b77,Namespace:calico-system,Attempt:0,}" Nov 1 00:44:18.759671 kubelet[2501]: E1101 00:44:18.759587 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.759671 kubelet[2501]: W1101 00:44:18.759627 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.759671 kubelet[2501]: E1101 00:44:18.759663 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:18.760287 kubelet[2501]: E1101 00:44:18.760225 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.760287 kubelet[2501]: W1101 00:44:18.760261 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.760572 kubelet[2501]: E1101 00:44:18.760297 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.761003 kubelet[2501]: E1101 00:44:18.760941 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.761003 kubelet[2501]: W1101 00:44:18.760978 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.761232 kubelet[2501]: E1101 00:44:18.761020 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.761577 kubelet[2501]: E1101 00:44:18.761538 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.761577 kubelet[2501]: W1101 00:44:18.761567 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.761952 kubelet[2501]: E1101 00:44:18.761644 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.762159 kubelet[2501]: E1101 00:44:18.762071 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.762159 kubelet[2501]: W1101 00:44:18.762103 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.762532 kubelet[2501]: E1101 00:44:18.762184 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.762696 env[1565]: time="2025-11-01T00:44:18.762046449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:18.762696 env[1565]: time="2025-11-01T00:44:18.762174741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:18.762696 env[1565]: time="2025-11-01T00:44:18.762233402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:18.762999 kubelet[2501]: E1101 00:44:18.762629 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.762999 kubelet[2501]: W1101 00:44:18.762658 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.762999 kubelet[2501]: E1101 00:44:18.762752 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.763294 env[1565]: time="2025-11-01T00:44:18.762685910Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/453373e240104f07be56e55b6456f0f118330092894184896c8c77a1beb92421 pid=3118 runtime=io.containerd.runc.v2 Nov 1 00:44:18.763398 kubelet[2501]: E1101 00:44:18.763151 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.763398 kubelet[2501]: W1101 00:44:18.763176 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.763398 kubelet[2501]: E1101 00:44:18.763236 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.763714 kubelet[2501]: E1101 00:44:18.763653 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.763714 kubelet[2501]: W1101 00:44:18.763677 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.763917 kubelet[2501]: E1101 00:44:18.763765 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.764144 kubelet[2501]: E1101 00:44:18.764095 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.764144 kubelet[2501]: W1101 00:44:18.764122 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.764378 kubelet[2501]: E1101 00:44:18.764196 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:18.764644 kubelet[2501]: E1101 00:44:18.764615 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.764760 kubelet[2501]: W1101 00:44:18.764644 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.764760 kubelet[2501]: E1101 00:44:18.764715 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.765173 kubelet[2501]: E1101 00:44:18.765147 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.765283 kubelet[2501]: W1101 00:44:18.765174 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.765283 kubelet[2501]: E1101 00:44:18.765239 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.765715 kubelet[2501]: E1101 00:44:18.765688 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.765827 kubelet[2501]: W1101 00:44:18.765715 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.765827 kubelet[2501]: E1101 00:44:18.765783 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.766237 kubelet[2501]: E1101 00:44:18.766192 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.766237 kubelet[2501]: W1101 00:44:18.766216 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.766444 kubelet[2501]: E1101 00:44:18.766279 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.766723 kubelet[2501]: E1101 00:44:18.766676 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.766723 kubelet[2501]: W1101 00:44:18.766702 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.766958 kubelet[2501]: E1101 00:44:18.766769 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:18.767238 kubelet[2501]: E1101 00:44:18.767183 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.767238 kubelet[2501]: W1101 00:44:18.767211 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.767469 kubelet[2501]: E1101 00:44:18.767291 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.767746 kubelet[2501]: E1101 00:44:18.767699 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.767746 kubelet[2501]: W1101 00:44:18.767725 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.767977 kubelet[2501]: E1101 00:44:18.767790 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.768221 kubelet[2501]: E1101 00:44:18.768195 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.768329 kubelet[2501]: W1101 00:44:18.768221 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.768329 kubelet[2501]: E1101 00:44:18.768285 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.768705 kubelet[2501]: E1101 00:44:18.768658 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.768705 kubelet[2501]: W1101 00:44:18.768683 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.768956 kubelet[2501]: E1101 00:44:18.768749 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.769244 kubelet[2501]: E1101 00:44:18.769197 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.769244 kubelet[2501]: W1101 00:44:18.769234 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.769593 kubelet[2501]: E1101 00:44:18.769308 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:18.769811 kubelet[2501]: E1101 00:44:18.769778 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.769944 kubelet[2501]: W1101 00:44:18.769817 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.769944 kubelet[2501]: E1101 00:44:18.769896 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.770437 kubelet[2501]: E1101 00:44:18.770406 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.770585 kubelet[2501]: W1101 00:44:18.770443 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.770585 kubelet[2501]: E1101 00:44:18.770536 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.771150 kubelet[2501]: E1101 00:44:18.771083 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.771150 kubelet[2501]: W1101 00:44:18.771120 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.771409 kubelet[2501]: E1101 00:44:18.771206 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.771687 kubelet[2501]: E1101 00:44:18.771617 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.771687 kubelet[2501]: W1101 00:44:18.771658 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.771982 kubelet[2501]: E1101 00:44:18.771741 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.772365 kubelet[2501]: E1101 00:44:18.772310 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.772365 kubelet[2501]: W1101 00:44:18.772344 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.772684 kubelet[2501]: E1101 00:44:18.772384 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:18.773114 kubelet[2501]: E1101 00:44:18.773045 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.773114 kubelet[2501]: W1101 00:44:18.773072 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.773114 kubelet[2501]: E1101 00:44:18.773102 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.790586 systemd[1]: Started cri-containerd-453373e240104f07be56e55b6456f0f118330092894184896c8c77a1beb92421.scope. Nov 1 00:44:18.793951 kubelet[2501]: E1101 00:44:18.793875 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:18.793951 kubelet[2501]: W1101 00:44:18.793908 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:18.793951 kubelet[2501]: E1101 00:44:18.793936 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:18.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit: BPF prog-id=119 op=LOAD Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=3118 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:18.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435333337336532343031303466303762653536653535623634353666 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=3118 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:18.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435333337336532343031303466303762653536653535623634353666 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 
audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit: BPF prog-id=120 op=LOAD Nov 1 00:44:18.804000 audit[3137]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000386c20 items=0 ppid=3118 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:18.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435333337336532343031303466303762653536653535623634353666 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.804000 audit: BPF prog-id=121 op=LOAD Nov 1 00:44:18.804000 audit[3137]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000386c68 items=0 ppid=3118 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:18.804000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435333337336532343031303466303762653536653535623634353666 Nov 1 00:44:18.805000 audit: BPF prog-id=121 op=UNLOAD Nov 1 00:44:18.805000 audit: BPF prog-id=120 op=UNLOAD Nov 1 00:44:18.805000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.805000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.805000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.805000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.805000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.805000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.805000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.805000 audit[3137]: AVC avc: denied { perfmon } for pid=3137 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.805000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.805000 audit[3137]: AVC avc: denied { bpf } for pid=3137 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:18.805000 audit: BPF prog-id=122 op=LOAD Nov 1 00:44:18.805000 audit[3137]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000387078 items=0 ppid=3118 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:18.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435333337336532343031303466303762653536653535623634353666 Nov 1 00:44:18.818251 env[1565]: time="2025-11-01T00:44:18.818202287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k97x5,Uid:b8ff4200-8557-4abf-a40e-51e49af08b77,Namespace:calico-system,Attempt:0,} returns sandbox id \"453373e240104f07be56e55b6456f0f118330092894184896c8c77a1beb92421\"" Nov 1 00:44:19.202000 audit[3180]: NETFILTER_CFG 
table=filter:99 family=2 entries=22 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:19.244568 kernel: kauditd_printk_skb: 139 callbacks suppressed Nov 1 00:44:19.244666 kernel: audit: type=1325 audit(1761957859.202:954): table=filter:99 family=2 entries=22 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:19.202000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe6c73dc60 a2=0 a3=7ffe6c73dc4c items=0 ppid=2683 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:19.398063 kernel: audit: type=1300 audit(1761957859.202:954): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe6c73dc60 a2=0 a3=7ffe6c73dc4c items=0 ppid=2683 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:19.398114 kernel: audit: type=1327 audit(1761957859.202:954): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:19.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:19.458000 audit[3180]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:19.458000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe6c73dc60 a2=0 a3=0 items=0 ppid=2683 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:19.612763 kernel: audit: type=1325 audit(1761957859.458:955): table=nat:100 family=2 entries=12 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:19.612813 kernel: audit: type=1300 audit(1761957859.458:955): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe6c73dc60 a2=0 a3=0 items=0 ppid=2683 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:19.612826 kernel: audit: type=1327 audit(1761957859.458:955): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:19.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:20.047695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707308205.mount: Deactivated successfully. 
Nov 1 00:44:20.529013 kubelet[2501]: E1101 00:44:20.528961 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:44:21.105504 env[1565]: time="2025-11-01T00:44:21.105441385Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:21.106077 env[1565]: time="2025-11-01T00:44:21.106037471Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:21.106647 env[1565]: time="2025-11-01T00:44:21.106606701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:21.107177 env[1565]: time="2025-11-01T00:44:21.107133298Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:21.107480 env[1565]: time="2025-11-01T00:44:21.107441779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:44:21.108255 env[1565]: time="2025-11-01T00:44:21.108241339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:44:21.112139 env[1565]: time="2025-11-01T00:44:21.112118506Z" level=info msg="CreateContainer within sandbox \"824bf44287674249db76af65924b797bd2db7f4672b87202d9c7daef37b1bfc8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:44:21.116655 env[1565]: time="2025-11-01T00:44:21.116636975Z" level=info msg="CreateContainer within sandbox \"824bf44287674249db76af65924b797bd2db7f4672b87202d9c7daef37b1bfc8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ff1a868f2293f4dc99b5a2479c6a6db1d00f751a4d8f5dbb33158c0fc023e5c7\"" Nov 1 00:44:21.116823 env[1565]: time="2025-11-01T00:44:21.116812164Z" level=info msg="StartContainer for \"ff1a868f2293f4dc99b5a2479c6a6db1d00f751a4d8f5dbb33158c0fc023e5c7\"" Nov 1 00:44:21.126413 systemd[1]: Started cri-containerd-ff1a868f2293f4dc99b5a2479c6a6db1d00f751a4d8f5dbb33158c0fc023e5c7.scope. 
Nov 1 00:44:21.130000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.130000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.260286 kernel: audit: type=1400 audit(1761957861.130:956): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.260333 kernel: audit: type=1400 audit(1761957861.130:957): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.260355 kernel: audit: type=1400 audit(1761957861.130:958): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.130000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.130000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.389379 kernel: audit: type=1400 audit(1761957861.130:959): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.130000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.130000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.130000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.130000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.130000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.258000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.258000 audit: BPF prog-id=123 op=LOAD Nov 1 00:44:21.259000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.259000 audit[3191]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=3039 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:21.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666316138363866323239336634646339396235613234373963366136 Nov 1 00:44:21.259000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.259000 audit[3191]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=3039 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:21.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666316138363866323239336634646339396235613234373963366136 Nov 1 00:44:21.259000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.259000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.259000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.259000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.259000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.259000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.259000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.259000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.259000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.259000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.259000 audit: BPF prog-id=124 op=LOAD Nov 1 00:44:21.259000 audit[3191]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0002174d0 items=0 
ppid=3039 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:21.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666316138363866323239336634646339396235613234373963366136 Nov 1 00:44:21.387000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.387000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.387000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.387000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.387000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.387000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.387000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.387000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.387000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.387000 audit: BPF prog-id=125 op=LOAD Nov 1 00:44:21.387000 audit[3191]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000217518 items=0 ppid=3039 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:21.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666316138363866323239336634646339396235613234373963366136 Nov 1 00:44:21.388000 audit: BPF prog-id=125 op=UNLOAD Nov 1 00:44:21.388000 audit: BPF prog-id=124 op=UNLOAD Nov 1 00:44:21.388000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.388000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.388000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.388000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.388000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.388000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.388000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.388000 audit[3191]: AVC avc: denied { perfmon } for pid=3191 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.388000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.388000 audit[3191]: AVC avc: denied { bpf } for pid=3191 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:21.388000 audit: BPF prog-id=126 op=LOAD Nov 1 00:44:21.388000 audit[3191]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000217928 items=0 ppid=3039 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:21.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666316138363866323239336634646339396235613234373963366136 Nov 1 00:44:21.406185 env[1565]: time="2025-11-01T00:44:21.406158882Z" level=info msg="StartContainer for \"ff1a868f2293f4dc99b5a2479c6a6db1d00f751a4d8f5dbb33158c0fc023e5c7\" returns successfully" Nov 1 00:44:21.594341 kubelet[2501]: I1101 00:44:21.594277 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f944f9fdf-nr2k9" podStartSLOduration=1.112713574 podStartE2EDuration="3.594258608s" podCreationTimestamp="2025-11-01 00:44:18 +0000 UTC" firstStartedPulling="2025-11-01 00:44:18.626631818 +0000 UTC m=+18.151561834" lastFinishedPulling="2025-11-01 00:44:21.108176853 +0000 UTC m=+20.633106868" observedRunningTime="2025-11-01 00:44:21.593957753 +0000 UTC m=+21.118887797" watchObservedRunningTime="2025-11-01 00:44:21.594258608 +0000 UTC m=+21.119188632" Nov 1 00:44:21.609000 audit[3238]: NETFILTER_CFG table=filter:101 family=2 entries=21 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:21.609000 audit[3238]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=7480 a0=3 a1=7ffdb2275060 a2=0 a3=7ffdb227504c items=0 ppid=2683 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:21.609000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:21.623000 audit[3238]: NETFILTER_CFG table=nat:102 family=2 entries=19 op=nft_register_chain pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:21.623000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffdb2275060 a2=0 a3=7ffdb227504c items=0 ppid=2683 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:21.623000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:21.657811 kubelet[2501]: E1101 00:44:21.657714 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.657811 kubelet[2501]: W1101 00:44:21.657741 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.657811 kubelet[2501]: E1101 00:44:21.657774 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.658091 kubelet[2501]: E1101 00:44:21.658036 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.658091 kubelet[2501]: W1101 00:44:21.658054 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.658091 kubelet[2501]: E1101 00:44:21.658073 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.658355 kubelet[2501]: E1101 00:44:21.658336 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.658355 kubelet[2501]: W1101 00:44:21.658352 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.658546 kubelet[2501]: E1101 00:44:21.658371 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:21.658751 kubelet[2501]: E1101 00:44:21.658729 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.658751 kubelet[2501]: W1101 00:44:21.658748 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.658956 kubelet[2501]: E1101 00:44:21.658770 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.659075 kubelet[2501]: E1101 00:44:21.659056 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.659075 kubelet[2501]: W1101 00:44:21.659072 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.659234 kubelet[2501]: E1101 00:44:21.659091 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.659360 kubelet[2501]: E1101 00:44:21.659342 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.659360 kubelet[2501]: W1101 00:44:21.659357 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.659543 kubelet[2501]: E1101 00:44:21.659375 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.659662 kubelet[2501]: E1101 00:44:21.659644 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.659662 kubelet[2501]: W1101 00:44:21.659658 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.659819 kubelet[2501]: E1101 00:44:21.659676 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.659942 kubelet[2501]: E1101 00:44:21.659927 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.659942 kubelet[2501]: W1101 00:44:21.659941 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.660096 kubelet[2501]: E1101 00:44:21.659958 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:21.660211 kubelet[2501]: E1101 00:44:21.660196 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.660211 kubelet[2501]: W1101 00:44:21.660210 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.660394 kubelet[2501]: E1101 00:44:21.660227 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.660536 kubelet[2501]: E1101 00:44:21.660517 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.660536 kubelet[2501]: W1101 00:44:21.660535 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.660715 kubelet[2501]: E1101 00:44:21.660555 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.660817 kubelet[2501]: E1101 00:44:21.660807 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.660918 kubelet[2501]: W1101 00:44:21.660825 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.660918 kubelet[2501]: E1101 00:44:21.660846 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.661108 kubelet[2501]: E1101 00:44:21.661087 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.661108 kubelet[2501]: W1101 00:44:21.661105 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.661301 kubelet[2501]: E1101 00:44:21.661127 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.661439 kubelet[2501]: E1101 00:44:21.661420 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.661439 kubelet[2501]: W1101 00:44:21.661437 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.661626 kubelet[2501]: E1101 00:44:21.661460 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:21.661754 kubelet[2501]: E1101 00:44:21.661735 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.661754 kubelet[2501]: W1101 00:44:21.661752 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.661940 kubelet[2501]: E1101 00:44:21.661772 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.662082 kubelet[2501]: E1101 00:44:21.662064 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.662082 kubelet[2501]: W1101 00:44:21.662080 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.662241 kubelet[2501]: E1101 00:44:21.662099 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.684846 kubelet[2501]: E1101 00:44:21.684799 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.684846 kubelet[2501]: W1101 00:44:21.684837 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.685243 kubelet[2501]: E1101 00:44:21.684873 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.685441 kubelet[2501]: E1101 00:44:21.685411 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.685585 kubelet[2501]: W1101 00:44:21.685443 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.685585 kubelet[2501]: E1101 00:44:21.685478 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.686077 kubelet[2501]: E1101 00:44:21.686046 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.686215 kubelet[2501]: W1101 00:44:21.686124 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.686215 kubelet[2501]: E1101 00:44:21.686158 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:21.686734 kubelet[2501]: E1101 00:44:21.686689 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.686843 kubelet[2501]: W1101 00:44:21.686742 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.686843 kubelet[2501]: E1101 00:44:21.686800 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.687367 kubelet[2501]: E1101 00:44:21.687311 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.687367 kubelet[2501]: W1101 00:44:21.687338 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.687659 kubelet[2501]: E1101 00:44:21.687437 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.687763 kubelet[2501]: E1101 00:44:21.687735 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.687763 kubelet[2501]: W1101 00:44:21.687757 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.687954 kubelet[2501]: E1101 00:44:21.687859 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.688286 kubelet[2501]: E1101 00:44:21.688235 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.688286 kubelet[2501]: W1101 00:44:21.688257 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.688535 kubelet[2501]: E1101 00:44:21.688310 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.688724 kubelet[2501]: E1101 00:44:21.688673 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.688724 kubelet[2501]: W1101 00:44:21.688696 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.688950 kubelet[2501]: E1101 00:44:21.688728 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:21.689242 kubelet[2501]: E1101 00:44:21.689208 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.689242 kubelet[2501]: W1101 00:44:21.689230 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.689575 kubelet[2501]: E1101 00:44:21.689258 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.689842 kubelet[2501]: E1101 00:44:21.689779 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.689842 kubelet[2501]: W1101 00:44:21.689812 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.690062 kubelet[2501]: E1101 00:44:21.689846 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.690327 kubelet[2501]: E1101 00:44:21.690276 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.690327 kubelet[2501]: W1101 00:44:21.690299 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.690588 kubelet[2501]: E1101 00:44:21.690372 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.690786 kubelet[2501]: E1101 00:44:21.690740 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.690786 kubelet[2501]: W1101 00:44:21.690765 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.691021 kubelet[2501]: E1101 00:44:21.690836 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.691193 kubelet[2501]: E1101 00:44:21.691169 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.691320 kubelet[2501]: W1101 00:44:21.691192 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.691320 kubelet[2501]: E1101 00:44:21.691259 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:21.691704 kubelet[2501]: E1101 00:44:21.691680 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.691835 kubelet[2501]: W1101 00:44:21.691703 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.691835 kubelet[2501]: E1101 00:44:21.691735 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.692328 kubelet[2501]: E1101 00:44:21.692283 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.692538 kubelet[2501]: W1101 00:44:21.692326 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.692538 kubelet[2501]: E1101 00:44:21.692386 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.693015 kubelet[2501]: E1101 00:44:21.692982 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.693015 kubelet[2501]: W1101 00:44:21.693012 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.693339 kubelet[2501]: E1101 00:44:21.693059 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.693650 kubelet[2501]: E1101 00:44:21.693611 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.693650 kubelet[2501]: W1101 00:44:21.693642 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.693976 kubelet[2501]: E1101 00:44:21.693672 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:44:21.694164 kubelet[2501]: E1101 00:44:21.694130 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:44:21.694164 kubelet[2501]: W1101 00:44:21.694162 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:44:21.694369 kubelet[2501]: E1101 00:44:21.694190 2501 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:44:22.421381 env[1565]: time="2025-11-01T00:44:22.421354194Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:22.422009 env[1565]: time="2025-11-01T00:44:22.421998511Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:22.422586 env[1565]: time="2025-11-01T00:44:22.422572520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:22.423221 env[1565]: time="2025-11-01T00:44:22.423210853Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:22.423848 env[1565]: time="2025-11-01T00:44:22.423803442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:44:22.424736 env[1565]: time="2025-11-01T00:44:22.424695519Z" level=info msg="CreateContainer within sandbox \"453373e240104f07be56e55b6456f0f118330092894184896c8c77a1beb92421\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:44:22.429505 env[1565]: time="2025-11-01T00:44:22.429480192Z" level=info msg="CreateContainer within sandbox \"453373e240104f07be56e55b6456f0f118330092894184896c8c77a1beb92421\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1adc1a7a1fa37ebc9b52464c7ca3943a90f2589556f21978b4e5763f3e4ba10d\"" Nov 1 00:44:22.429767 env[1565]: time="2025-11-01T00:44:22.429755181Z" level=info msg="StartContainer for \"1adc1a7a1fa37ebc9b52464c7ca3943a90f2589556f21978b4e5763f3e4ba10d\"" Nov 1 00:44:22.439163 systemd[1]: Started cri-containerd-1adc1a7a1fa37ebc9b52464c7ca3943a90f2589556f21978b4e5763f3e4ba10d.scope. 
Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=7fd966edb2c8 items=0 ppid=3118 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:22.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161646331613761316661333765626339623532343634633763613339 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit: BPF prog-id=127 op=LOAD Nov 1 00:44:22.444000 audit[3279]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0000f3c58 items=0 ppid=3118 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:22.444000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161646331613761316661333765626339623532343634633763613339 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit: BPF prog-id=128 op=LOAD Nov 1 00:44:22.444000 audit[3279]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0000f3ca8 items=0 ppid=3118 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:22.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161646331613761316661333765626339623532343634633763613339 Nov 1 00:44:22.444000 audit: BPF prog-id=128 op=UNLOAD Nov 1 00:44:22.444000 audit: BPF prog-id=127 op=UNLOAD Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { perfmon } for pid=3279 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit[3279]: AVC avc: denied { bpf } for pid=3279 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:22.444000 audit: BPF prog-id=129 op=LOAD Nov 1 00:44:22.444000 audit[3279]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0000f3d38 items=0 ppid=3118 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:22.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161646331613761316661333765626339623532343634633763613339 Nov 1 00:44:22.451920 env[1565]: time="2025-11-01T00:44:22.451865262Z" level=info msg="StartContainer for \"1adc1a7a1fa37ebc9b52464c7ca3943a90f2589556f21978b4e5763f3e4ba10d\" returns successfully" Nov 1 00:44:22.457092 systemd[1]: cri-containerd-1adc1a7a1fa37ebc9b52464c7ca3943a90f2589556f21978b4e5763f3e4ba10d.scope: Deactivated successfully. Nov 1 00:44:22.467813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1adc1a7a1fa37ebc9b52464c7ca3943a90f2589556f21978b4e5763f3e4ba10d-rootfs.mount: Deactivated successfully. 
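
The SYSCALL/PROCTITLE audit records above hex-encode the runc command line, because the process title contains NUL argv separators, and the AVC records name capabilities only by number (38 is CAP_PERFMON and 39 is CAP_BPF on this kernel). A minimal, stdlib-only Go sketch for decoding them follows; the hex constant is only a prefix of the value logged above, since the kernel truncates the PROCTITLE field.

    // Decode an audit PROCTITLE value: auditd hex-encodes the process title when
    // argv contains NUL separators. The constant below is just a prefix of the
    // value logged above (the full field is truncated by the kernel).
    package main

    import (
        "encoding/hex"
        "fmt"
        "log"
        "strings"
    )

    // Capability numbers referenced by the AVC records above (Linux 5.8+).
    var capNames = map[int]string{
        38: "CAP_PERFMON",
        39: "CAP_BPF",
    }

    func main() {
        const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            log.Fatal(err)
        }
        // argv elements are separated by NUL bytes.
        argv := strings.Split(string(raw), "\x00")
        fmt.Println(strings.Join(argv, " ")) // runc --root /run/containerd/runc/k8s.io

        for _, c := range []int{38, 39} {
            fmt.Printf("capability=%d -> %s\n", c, capNames[c])
        }
    }
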
Nov 1 00:44:22.467000 audit: BPF prog-id=129 op=UNLOAD Nov 1 00:44:22.529999 kubelet[2501]: E1101 00:44:22.529910 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:44:22.944116 env[1565]: time="2025-11-01T00:44:22.944025921Z" level=info msg="shim disconnected" id=1adc1a7a1fa37ebc9b52464c7ca3943a90f2589556f21978b4e5763f3e4ba10d Nov 1 00:44:22.944526 env[1565]: time="2025-11-01T00:44:22.944120730Z" level=warning msg="cleaning up after shim disconnected" id=1adc1a7a1fa37ebc9b52464c7ca3943a90f2589556f21978b4e5763f3e4ba10d namespace=k8s.io Nov 1 00:44:22.944526 env[1565]: time="2025-11-01T00:44:22.944151298Z" level=info msg="cleaning up dead shim" Nov 1 00:44:22.960245 env[1565]: time="2025-11-01T00:44:22.960158856Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3322 runtime=io.containerd.runc.v2\n" Nov 1 00:44:23.596814 env[1565]: time="2025-11-01T00:44:23.596729822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:44:24.530259 kubelet[2501]: E1101 00:44:24.530132 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:44:26.528993 kubelet[2501]: E1101 00:44:26.528941 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:44:27.071361 env[1565]: time="2025-11-01T00:44:27.071334623Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:27.071896 env[1565]: time="2025-11-01T00:44:27.071858690Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:27.072646 env[1565]: time="2025-11-01T00:44:27.072625056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:27.073613 env[1565]: time="2025-11-01T00:44:27.073598496Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:27.073855 env[1565]: time="2025-11-01T00:44:27.073841062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:44:27.075291 env[1565]: time="2025-11-01T00:44:27.075258373Z" level=info msg="CreateContainer within sandbox 
\"453373e240104f07be56e55b6456f0f118330092894184896c8c77a1beb92421\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:44:27.080649 env[1565]: time="2025-11-01T00:44:27.080607296Z" level=info msg="CreateContainer within sandbox \"453373e240104f07be56e55b6456f0f118330092894184896c8c77a1beb92421\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"15bce36ac6ef77215a1f871c92cdd9ce535587e5f831cc7ad0771a30450356ae\"" Nov 1 00:44:27.080868 env[1565]: time="2025-11-01T00:44:27.080839156Z" level=info msg="StartContainer for \"15bce36ac6ef77215a1f871c92cdd9ce535587e5f831cc7ad0771a30450356ae\"" Nov 1 00:44:27.091602 systemd[1]: Started cri-containerd-15bce36ac6ef77215a1f871c92cdd9ce535587e5f831cc7ad0771a30450356ae.scope. Nov 1 00:44:27.097000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.125237 kernel: kauditd_printk_skb: 103 callbacks suppressed Nov 1 00:44:27.125283 kernel: audit: type=1400 audit(1761957867.097:983): avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.097000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001476b0 a2=3c a3=7f71d4092878 items=0 ppid=3118 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:27.287679 kernel: audit: type=1300 audit(1761957867.097:983): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001476b0 a2=3c a3=7f71d4092878 items=0 ppid=3118 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:27.287718 kernel: audit: type=1327 audit(1761957867.097:983): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626365333661633665663737323135613166383731633932636464 Nov 1 00:44:27.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626365333661633665663737323135613166383731633932636464 Nov 1 00:44:27.097000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.445620 kernel: audit: type=1400 audit(1761957867.097:984): avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.445655 kernel: audit: type=1400 audit(1761957867.097:984): avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.097000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:27.097000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.573625 kernel: audit: type=1400 audit(1761957867.097:984): avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.573658 kernel: audit: type=1400 audit(1761957867.097:984): avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.097000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.579401 env[1565]: time="2025-11-01T00:44:27.579374482Z" level=info msg="StartContainer for \"15bce36ac6ef77215a1f871c92cdd9ce535587e5f831cc7ad0771a30450356ae\" returns successfully" Nov 1 00:44:27.097000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.702263 kernel: audit: type=1400 audit(1761957867.097:984): avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.702305 kernel: audit: type=1400 audit(1761957867.097:984): avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.097000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.766571 kernel: audit: type=1400 audit(1761957867.097:984): avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.097000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.097000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.097000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.097000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.097000 audit: BPF prog-id=130 op=LOAD Nov 1 00:44:27.097000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001479d8 a2=78 a3=c0003c6468 items=0 ppid=3118 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:44:27.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626365333661633665663737323135613166383731633932636464 Nov 1 00:44:27.189000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.189000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.189000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.189000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.189000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.189000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.189000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.189000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.189000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.189000 audit: BPF prog-id=131 op=LOAD Nov 1 00:44:27.189000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000147770 a2=78 a3=c0003c64b8 items=0 ppid=3118 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:27.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626365333661633665663737323135613166383731633932636464 Nov 1 00:44:27.381000 audit: BPF prog-id=131 op=UNLOAD Nov 1 00:44:27.381000 audit: BPF prog-id=130 op=UNLOAD Nov 1 00:44:27.381000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.381000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.381000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.381000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.381000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.381000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.381000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.381000 audit[3341]: AVC avc: denied { perfmon } for pid=3341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.381000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.381000 audit[3341]: AVC avc: denied { bpf } for pid=3341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:27.381000 audit: BPF prog-id=132 op=LOAD Nov 1 00:44:27.381000 audit[3341]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000147c30 a2=78 a3=c0003c6548 items=0 ppid=3118 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:27.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626365333661633665663737323135613166383731633932636464 Nov 1 00:44:28.381335 env[1565]: time="2025-11-01T00:44:28.381173405Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:44:28.385656 systemd[1]: cri-containerd-15bce36ac6ef77215a1f871c92cdd9ce535587e5f831cc7ad0771a30450356ae.scope: Deactivated successfully. Nov 1 00:44:28.401000 audit: BPF prog-id=132 op=UNLOAD Nov 1 00:44:28.403689 kubelet[2501]: I1101 00:44:28.403588 2501 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:44:28.430956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15bce36ac6ef77215a1f871c92cdd9ce535587e5f831cc7ad0771a30450356ae-rootfs.mount: Deactivated successfully. Nov 1 00:44:28.467398 systemd[1]: Created slice kubepods-burstable-pod027d8f2f_3806_4651_bd8a_463d116e5266.slice. Nov 1 00:44:28.475226 systemd[1]: Created slice kubepods-besteffort-pod9cd09edd_44db_4b31_b369_b622badfedc3.slice. Nov 1 00:44:28.479796 systemd[1]: Created slice kubepods-burstable-pod2eed7e34_9d6a_4f7a_a712_82a7a599696d.slice. 
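
The reload error above reports that /etc/cni/net.d holds no network config yet, and the sandbox failures reported further below all trace back to a missing /var/lib/calico/nodename, which calico-node writes once it is running. A minimal Go sketch of the two checks those messages suggest, using only the paths quoted in the log (run on the affected node; everything else here is illustrative):

    // Check the two conditions the surrounding errors point at: whether any CNI
    // config (*.conf / *.conflist) exists in /etc/cni/net.d, and whether
    // calico-node has written /var/lib/calico/nodename yet. Paths are taken
    // directly from the log messages; run as root on the affected node.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confs, _ := filepath.Glob("/etc/cni/net.d/*.conf")
        lists, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
        if len(confs)+len(lists) == 0 {
            fmt.Println("no CNI config in /etc/cni/net.d (matches the reload error above)")
        } else {
            fmt.Println("CNI config present:", append(confs, lists...))
        }

        if data, err := os.ReadFile("/var/lib/calico/nodename"); err != nil {
            fmt.Println("/var/lib/calico/nodename missing:", err)
            fmt.Println("-> calico-node is not up yet; the sandbox errors below are expected until it is")
        } else {
            fmt.Println("calico nodename:", string(data))
        }
    }

Both conditions normally clear on their own once calico-node on this host is running, which is consistent with the install-cni container started above having just written /etc/cni/net.d/calico-kubeconfig.
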
Nov 1 00:44:28.485321 systemd[1]: Created slice kubepods-besteffort-pod2f6bb8ca_d7c7_4d64_919c_e85097fdc068.slice. Nov 1 00:44:28.488651 systemd[1]: Created slice kubepods-besteffort-podaf4cf953_58b5_4727_a1f5_dcd340748032.slice. Nov 1 00:44:28.491951 systemd[1]: Created slice kubepods-besteffort-pod135646f8_0c66_45b5_80ce_9bb45c825de7.slice. Nov 1 00:44:28.494730 systemd[1]: Created slice kubepods-besteffort-pod6daa08f9_a020_4d78_b34b_6c75d3bd1afe.slice. Nov 1 00:44:28.537874 kubelet[2501]: I1101 00:44:28.537778 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6daa08f9-a020-4d78-b34b-6c75d3bd1afe-whisker-backend-key-pair\") pod \"whisker-6897698c76-c6xgv\" (UID: \"6daa08f9-a020-4d78-b34b-6c75d3bd1afe\") " pod="calico-system/whisker-6897698c76-c6xgv" Nov 1 00:44:28.538201 kubelet[2501]: I1101 00:44:28.537884 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6daa08f9-a020-4d78-b34b-6c75d3bd1afe-whisker-ca-bundle\") pod \"whisker-6897698c76-c6xgv\" (UID: \"6daa08f9-a020-4d78-b34b-6c75d3bd1afe\") " pod="calico-system/whisker-6897698c76-c6xgv" Nov 1 00:44:28.538201 kubelet[2501]: I1101 00:44:28.537934 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bvfj\" (UniqueName: \"kubernetes.io/projected/6daa08f9-a020-4d78-b34b-6c75d3bd1afe-kube-api-access-6bvfj\") pod \"whisker-6897698c76-c6xgv\" (UID: \"6daa08f9-a020-4d78-b34b-6c75d3bd1afe\") " pod="calico-system/whisker-6897698c76-c6xgv" Nov 1 00:44:28.538201 kubelet[2501]: I1101 00:44:28.537980 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/027d8f2f-3806-4651-bd8a-463d116e5266-config-volume\") pod \"coredns-668d6bf9bc-4lsqg\" (UID: \"027d8f2f-3806-4651-bd8a-463d116e5266\") " pod="kube-system/coredns-668d6bf9bc-4lsqg" Nov 1 00:44:28.538201 kubelet[2501]: I1101 00:44:28.538033 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2eed7e34-9d6a-4f7a-a712-82a7a599696d-config-volume\") pod \"coredns-668d6bf9bc-86gd2\" (UID: \"2eed7e34-9d6a-4f7a-a712-82a7a599696d\") " pod="kube-system/coredns-668d6bf9bc-86gd2" Nov 1 00:44:28.538201 kubelet[2501]: I1101 00:44:28.538079 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p47qt\" (UniqueName: \"kubernetes.io/projected/9cd09edd-44db-4b31-b369-b622badfedc3-kube-api-access-p47qt\") pod \"calico-kube-controllers-859bf66984-8h8hn\" (UID: \"9cd09edd-44db-4b31-b369-b622badfedc3\") " pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" Nov 1 00:44:28.538786 kubelet[2501]: I1101 00:44:28.538130 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/135646f8-0c66-45b5-80ce-9bb45c825de7-goldmane-key-pair\") pod \"goldmane-666569f655-bxfpm\" (UID: \"135646f8-0c66-45b5-80ce-9bb45c825de7\") " pod="calico-system/goldmane-666569f655-bxfpm" Nov 1 00:44:28.538786 kubelet[2501]: I1101 00:44:28.538175 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pvv5\" (UniqueName: 
\"kubernetes.io/projected/2f6bb8ca-d7c7-4d64-919c-e85097fdc068-kube-api-access-8pvv5\") pod \"calico-apiserver-5d69b6c6c-s7kdv\" (UID: \"2f6bb8ca-d7c7-4d64-919c-e85097fdc068\") " pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" Nov 1 00:44:28.538786 kubelet[2501]: I1101 00:44:28.538220 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfh6h\" (UniqueName: \"kubernetes.io/projected/135646f8-0c66-45b5-80ce-9bb45c825de7-kube-api-access-vfh6h\") pod \"goldmane-666569f655-bxfpm\" (UID: \"135646f8-0c66-45b5-80ce-9bb45c825de7\") " pod="calico-system/goldmane-666569f655-bxfpm" Nov 1 00:44:28.538786 kubelet[2501]: I1101 00:44:28.538269 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdwv8\" (UniqueName: \"kubernetes.io/projected/027d8f2f-3806-4651-bd8a-463d116e5266-kube-api-access-sdwv8\") pod \"coredns-668d6bf9bc-4lsqg\" (UID: \"027d8f2f-3806-4651-bd8a-463d116e5266\") " pod="kube-system/coredns-668d6bf9bc-4lsqg" Nov 1 00:44:28.538786 kubelet[2501]: I1101 00:44:28.538313 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f6bb8ca-d7c7-4d64-919c-e85097fdc068-calico-apiserver-certs\") pod \"calico-apiserver-5d69b6c6c-s7kdv\" (UID: \"2f6bb8ca-d7c7-4d64-919c-e85097fdc068\") " pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" Nov 1 00:44:28.539303 kubelet[2501]: I1101 00:44:28.538442 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cd09edd-44db-4b31-b369-b622badfedc3-tigera-ca-bundle\") pod \"calico-kube-controllers-859bf66984-8h8hn\" (UID: \"9cd09edd-44db-4b31-b369-b622badfedc3\") " pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" Nov 1 00:44:28.539303 kubelet[2501]: I1101 00:44:28.538618 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48hx5\" (UniqueName: \"kubernetes.io/projected/af4cf953-58b5-4727-a1f5-dcd340748032-kube-api-access-48hx5\") pod \"calico-apiserver-5d69b6c6c-vws84\" (UID: \"af4cf953-58b5-4727-a1f5-dcd340748032\") " pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" Nov 1 00:44:28.539303 kubelet[2501]: I1101 00:44:28.538736 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/135646f8-0c66-45b5-80ce-9bb45c825de7-config\") pod \"goldmane-666569f655-bxfpm\" (UID: \"135646f8-0c66-45b5-80ce-9bb45c825de7\") " pod="calico-system/goldmane-666569f655-bxfpm" Nov 1 00:44:28.539303 kubelet[2501]: I1101 00:44:28.538809 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4htfl\" (UniqueName: \"kubernetes.io/projected/2eed7e34-9d6a-4f7a-a712-82a7a599696d-kube-api-access-4htfl\") pod \"coredns-668d6bf9bc-86gd2\" (UID: \"2eed7e34-9d6a-4f7a-a712-82a7a599696d\") " pod="kube-system/coredns-668d6bf9bc-86gd2" Nov 1 00:44:28.539303 kubelet[2501]: I1101 00:44:28.538884 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/af4cf953-58b5-4727-a1f5-dcd340748032-calico-apiserver-certs\") pod \"calico-apiserver-5d69b6c6c-vws84\" (UID: \"af4cf953-58b5-4727-a1f5-dcd340748032\") " 
pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" Nov 1 00:44:28.539849 kubelet[2501]: I1101 00:44:28.538944 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/135646f8-0c66-45b5-80ce-9bb45c825de7-goldmane-ca-bundle\") pod \"goldmane-666569f655-bxfpm\" (UID: \"135646f8-0c66-45b5-80ce-9bb45c825de7\") " pod="calico-system/goldmane-666569f655-bxfpm" Nov 1 00:44:28.542850 systemd[1]: Created slice kubepods-besteffort-podce15fc81_d33c_45b3_b08a_5d312fb076f0.slice. Nov 1 00:44:28.547907 env[1565]: time="2025-11-01T00:44:28.547777635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpkjg,Uid:ce15fc81-d33c-45b3-b08a-5d312fb076f0,Namespace:calico-system,Attempt:0,}" Nov 1 00:44:28.772561 env[1565]: time="2025-11-01T00:44:28.772429127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4lsqg,Uid:027d8f2f-3806-4651-bd8a-463d116e5266,Namespace:kube-system,Attempt:0,}" Nov 1 00:44:28.778655 env[1565]: time="2025-11-01T00:44:28.778548131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-859bf66984-8h8hn,Uid:9cd09edd-44db-4b31-b369-b622badfedc3,Namespace:calico-system,Attempt:0,}" Nov 1 00:44:28.783583 env[1565]: time="2025-11-01T00:44:28.783476392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86gd2,Uid:2eed7e34-9d6a-4f7a-a712-82a7a599696d,Namespace:kube-system,Attempt:0,}" Nov 1 00:44:28.788641 env[1565]: time="2025-11-01T00:44:28.788530927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d69b6c6c-s7kdv,Uid:2f6bb8ca-d7c7-4d64-919c-e85097fdc068,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:44:28.791558 env[1565]: time="2025-11-01T00:44:28.791445925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d69b6c6c-vws84,Uid:af4cf953-58b5-4727-a1f5-dcd340748032,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:44:28.794587 env[1565]: time="2025-11-01T00:44:28.794484960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bxfpm,Uid:135646f8-0c66-45b5-80ce-9bb45c825de7,Namespace:calico-system,Attempt:0,}" Nov 1 00:44:28.797462 env[1565]: time="2025-11-01T00:44:28.797356721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6897698c76-c6xgv,Uid:6daa08f9-a020-4d78-b34b-6c75d3bd1afe,Namespace:calico-system,Attempt:0,}" Nov 1 00:44:28.830461 env[1565]: time="2025-11-01T00:44:28.830357629Z" level=info msg="shim disconnected" id=15bce36ac6ef77215a1f871c92cdd9ce535587e5f831cc7ad0771a30450356ae Nov 1 00:44:28.830687 env[1565]: time="2025-11-01T00:44:28.830457400Z" level=warning msg="cleaning up after shim disconnected" id=15bce36ac6ef77215a1f871c92cdd9ce535587e5f831cc7ad0771a30450356ae namespace=k8s.io Nov 1 00:44:28.830687 env[1565]: time="2025-11-01T00:44:28.830512864Z" level=info msg="cleaning up dead shim" Nov 1 00:44:28.847944 env[1565]: time="2025-11-01T00:44:28.847800222Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3406 runtime=io.containerd.runc.v2\n" Nov 1 00:44:28.903308 env[1565]: time="2025-11-01T00:44:28.903235126Z" level=error msg="Failed to destroy network for sandbox \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 1 00:44:28.903763 env[1565]: time="2025-11-01T00:44:28.903732562Z" level=error msg="encountered an error cleaning up failed sandbox \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.903839 env[1565]: time="2025-11-01T00:44:28.903778096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpkjg,Uid:ce15fc81-d33c-45b3-b08a-5d312fb076f0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.904327 kubelet[2501]: E1101 00:44:28.903966 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.904327 kubelet[2501]: E1101 00:44:28.904037 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qpkjg" Nov 1 00:44:28.904327 kubelet[2501]: E1101 00:44:28.904059 2501 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qpkjg" Nov 1 00:44:28.904483 kubelet[2501]: E1101 00:44:28.904103 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:44:28.906189 env[1565]: time="2025-11-01T00:44:28.906133493Z" level=error msg="Failed to destroy network for sandbox \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.906447 env[1565]: time="2025-11-01T00:44:28.906420877Z" level=error msg="encountered an error cleaning up failed sandbox \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.906498 env[1565]: time="2025-11-01T00:44:28.906473357Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d69b6c6c-s7kdv,Uid:2f6bb8ca-d7c7-4d64-919c-e85097fdc068,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.906673 kubelet[2501]: E1101 00:44:28.906637 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.906746 kubelet[2501]: E1101 00:44:28.906696 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" Nov 1 00:44:28.906746 kubelet[2501]: E1101 00:44:28.906718 2501 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" Nov 1 00:44:28.906847 kubelet[2501]: E1101 00:44:28.906765 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d69b6c6c-s7kdv_calico-apiserver(2f6bb8ca-d7c7-4d64-919c-e85097fdc068)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d69b6c6c-s7kdv_calico-apiserver(2f6bb8ca-d7c7-4d64-919c-e85097fdc068)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:44:28.907063 env[1565]: time="2025-11-01T00:44:28.907044403Z" level=error msg="Failed to destroy network for sandbox \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.907248 env[1565]: time="2025-11-01T00:44:28.907230990Z" level=error msg="encountered an error cleaning up failed sandbox \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.907283 env[1565]: time="2025-11-01T00:44:28.907257903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4lsqg,Uid:027d8f2f-3806-4651-bd8a-463d116e5266,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.907361 kubelet[2501]: E1101 00:44:28.907349 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.907430 kubelet[2501]: E1101 00:44:28.907369 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4lsqg" Nov 1 00:44:28.907430 kubelet[2501]: E1101 00:44:28.907385 2501 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4lsqg" Nov 1 00:44:28.907430 kubelet[2501]: E1101 00:44:28.907406 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4lsqg_kube-system(027d8f2f-3806-4651-bd8a-463d116e5266)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4lsqg_kube-system(027d8f2f-3806-4651-bd8a-463d116e5266)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4lsqg" podUID="027d8f2f-3806-4651-bd8a-463d116e5266" Nov 1 00:44:28.907632 env[1565]: time="2025-11-01T00:44:28.907605206Z" level=error msg="Failed to destroy network for sandbox \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.907830 env[1565]: time="2025-11-01T00:44:28.907807731Z" level=error msg="Failed to destroy network for sandbox \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.907922 env[1565]: time="2025-11-01T00:44:28.907900715Z" level=error msg="encountered an error cleaning up failed sandbox \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.907958 env[1565]: time="2025-11-01T00:44:28.907937653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86gd2,Uid:2eed7e34-9d6a-4f7a-a712-82a7a599696d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.908027 env[1565]: time="2025-11-01T00:44:28.908010547Z" level=error msg="encountered an error cleaning up failed sandbox \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.908063 env[1565]: time="2025-11-01T00:44:28.908037013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-859bf66984-8h8hn,Uid:9cd09edd-44db-4b31-b369-b622badfedc3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.908109 kubelet[2501]: E1101 00:44:28.908035 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.908109 kubelet[2501]: E1101 00:44:28.908063 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-86gd2" Nov 1 00:44:28.908109 kubelet[2501]: E1101 00:44:28.908077 2501 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-86gd2" Nov 1 00:44:28.908185 kubelet[2501]: E1101 00:44:28.908101 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-86gd2_kube-system(2eed7e34-9d6a-4f7a-a712-82a7a599696d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-86gd2_kube-system(2eed7e34-9d6a-4f7a-a712-82a7a599696d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-86gd2" podUID="2eed7e34-9d6a-4f7a-a712-82a7a599696d" Nov 1 00:44:28.908185 kubelet[2501]: E1101 00:44:28.908112 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.908185 kubelet[2501]: E1101 00:44:28.908139 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" Nov 1 00:44:28.908277 kubelet[2501]: E1101 00:44:28.908157 2501 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" Nov 1 00:44:28.908277 kubelet[2501]: E1101 00:44:28.908184 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-859bf66984-8h8hn_calico-system(9cd09edd-44db-4b31-b369-b622badfedc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-859bf66984-8h8hn_calico-system(9cd09edd-44db-4b31-b369-b622badfedc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:44:28.908970 env[1565]: 
time="2025-11-01T00:44:28.908950469Z" level=error msg="Failed to destroy network for sandbox \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.909106 env[1565]: time="2025-11-01T00:44:28.909091354Z" level=error msg="encountered an error cleaning up failed sandbox \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.909139 env[1565]: time="2025-11-01T00:44:28.909113487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6897698c76-c6xgv,Uid:6daa08f9-a020-4d78-b34b-6c75d3bd1afe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.909194 kubelet[2501]: E1101 00:44:28.909178 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.909220 kubelet[2501]: E1101 00:44:28.909203 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6897698c76-c6xgv" Nov 1 00:44:28.909249 kubelet[2501]: E1101 00:44:28.909218 2501 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6897698c76-c6xgv" Nov 1 00:44:28.909249 kubelet[2501]: E1101 00:44:28.909238 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6897698c76-c6xgv_calico-system(6daa08f9-a020-4d78-b34b-6c75d3bd1afe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6897698c76-c6xgv_calico-system(6daa08f9-a020-4d78-b34b-6c75d3bd1afe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6897698c76-c6xgv" podUID="6daa08f9-a020-4d78-b34b-6c75d3bd1afe" Nov 
1 00:44:28.910071 env[1565]: time="2025-11-01T00:44:28.910045379Z" level=error msg="Failed to destroy network for sandbox \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.910271 env[1565]: time="2025-11-01T00:44:28.910250132Z" level=error msg="encountered an error cleaning up failed sandbox \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.910309 env[1565]: time="2025-11-01T00:44:28.910289338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d69b6c6c-vws84,Uid:af4cf953-58b5-4727-a1f5-dcd340748032,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.910388 kubelet[2501]: E1101 00:44:28.910373 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.910420 kubelet[2501]: E1101 00:44:28.910400 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" Nov 1 00:44:28.910420 kubelet[2501]: E1101 00:44:28.910411 2501 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" Nov 1 00:44:28.910466 kubelet[2501]: E1101 00:44:28.910434 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d69b6c6c-vws84_calico-apiserver(af4cf953-58b5-4727-a1f5-dcd340748032)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d69b6c6c-vws84_calico-apiserver(af4cf953-58b5-4727-a1f5-dcd340748032)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:44:28.910697 env[1565]: time="2025-11-01T00:44:28.910681137Z" level=error msg="Failed to destroy network for sandbox \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.910834 env[1565]: time="2025-11-01T00:44:28.910820482Z" level=error msg="encountered an error cleaning up failed sandbox \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.910868 env[1565]: time="2025-11-01T00:44:28.910844691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bxfpm,Uid:135646f8-0c66-45b5-80ce-9bb45c825de7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.910923 kubelet[2501]: E1101 00:44:28.910912 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:28.910950 kubelet[2501]: E1101 00:44:28.910930 2501 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bxfpm" Nov 1 00:44:28.910950 kubelet[2501]: E1101 00:44:28.910940 2501 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bxfpm" Nov 1 00:44:28.910993 kubelet[2501]: E1101 00:44:28.910956 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-bxfpm_calico-system(135646f8-0c66-45b5-80ce-9bb45c825de7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-bxfpm_calico-system(135646f8-0c66-45b5-80ce-9bb45c825de7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:44:29.614787 kubelet[2501]: I1101 00:44:29.614719 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:44:29.616171 env[1565]: time="2025-11-01T00:44:29.616092470Z" level=info msg="StopPodSandbox for \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\"" Nov 1 00:44:29.621453 kubelet[2501]: I1101 00:44:29.621378 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:44:29.621801 env[1565]: time="2025-11-01T00:44:29.621735554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:44:29.622821 env[1565]: time="2025-11-01T00:44:29.622736569Z" level=info msg="StopPodSandbox for \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\"" Nov 1 00:44:29.623945 kubelet[2501]: I1101 00:44:29.623877 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:44:29.625342 env[1565]: time="2025-11-01T00:44:29.625239965Z" level=info msg="StopPodSandbox for \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\"" Nov 1 00:44:29.625781 kubelet[2501]: I1101 00:44:29.625769 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:44:29.626091 env[1565]: time="2025-11-01T00:44:29.626067296Z" level=info msg="StopPodSandbox for \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\"" Nov 1 00:44:29.626308 kubelet[2501]: I1101 00:44:29.626296 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 00:44:29.626636 env[1565]: time="2025-11-01T00:44:29.626615879Z" level=info msg="StopPodSandbox for \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\"" Nov 1 00:44:29.626929 kubelet[2501]: I1101 00:44:29.626917 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:44:29.627218 env[1565]: time="2025-11-01T00:44:29.627194993Z" level=info msg="StopPodSandbox for \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\"" Nov 1 00:44:29.627428 kubelet[2501]: I1101 00:44:29.627416 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:44:29.627739 env[1565]: time="2025-11-01T00:44:29.627710545Z" level=info msg="StopPodSandbox for \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\"" Nov 1 00:44:29.628018 kubelet[2501]: I1101 00:44:29.628001 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:44:29.628505 env[1565]: time="2025-11-01T00:44:29.628471654Z" level=info msg="StopPodSandbox for \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\"" Nov 1 00:44:29.640609 env[1565]: time="2025-11-01T00:44:29.640550904Z" level=error msg="StopPodSandbox for 
\"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\" failed" error="failed to destroy network for sandbox \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:29.640819 kubelet[2501]: E1101 00:44:29.640761 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:44:29.640906 kubelet[2501]: E1101 00:44:29.640858 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78"} Nov 1 00:44:29.640950 kubelet[2501]: E1101 00:44:29.640914 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2eed7e34-9d6a-4f7a-a712-82a7a599696d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:44:29.640950 kubelet[2501]: E1101 00:44:29.640931 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2eed7e34-9d6a-4f7a-a712-82a7a599696d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-86gd2" podUID="2eed7e34-9d6a-4f7a-a712-82a7a599696d" Nov 1 00:44:29.641395 env[1565]: time="2025-11-01T00:44:29.641364209Z" level=error msg="StopPodSandbox for \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\" failed" error="failed to destroy network for sandbox \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:29.641441 env[1565]: time="2025-11-01T00:44:29.641365459Z" level=error msg="StopPodSandbox for \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\" failed" error="failed to destroy network for sandbox \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:29.641514 kubelet[2501]: E1101 00:44:29.641485 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:44:29.641564 kubelet[2501]: E1101 00:44:29.641521 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408"} Nov 1 00:44:29.641564 kubelet[2501]: E1101 00:44:29.641543 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce15fc81-d33c-45b3-b08a-5d312fb076f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:44:29.641654 kubelet[2501]: E1101 00:44:29.641560 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce15fc81-d33c-45b3-b08a-5d312fb076f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:44:29.641654 kubelet[2501]: E1101 00:44:29.641485 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:44:29.641654 kubelet[2501]: E1101 00:44:29.641592 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd"} Nov 1 00:44:29.641654 kubelet[2501]: E1101 00:44:29.641619 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af4cf953-58b5-4727-a1f5-dcd340748032\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:44:29.641780 kubelet[2501]: E1101 00:44:29.641636 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af4cf953-58b5-4727-a1f5-dcd340748032\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:44:29.643207 env[1565]: time="2025-11-01T00:44:29.643166963Z" level=error msg="StopPodSandbox for \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\" failed" error="failed to destroy network for sandbox \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:29.643322 kubelet[2501]: E1101 00:44:29.643300 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:44:29.643375 kubelet[2501]: E1101 00:44:29.643331 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f"} Nov 1 00:44:29.643375 kubelet[2501]: E1101 00:44:29.643361 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"027d8f2f-3806-4651-bd8a-463d116e5266\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:44:29.643455 kubelet[2501]: E1101 00:44:29.643383 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"027d8f2f-3806-4651-bd8a-463d116e5266\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4lsqg" podUID="027d8f2f-3806-4651-bd8a-463d116e5266" Nov 1 00:44:29.643501 env[1565]: time="2025-11-01T00:44:29.643443839Z" level=error msg="StopPodSandbox for \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\" failed" error="failed to destroy network for sandbox \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:29.643568 kubelet[2501]: E1101 00:44:29.643547 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 
00:44:29.643607 kubelet[2501]: E1101 00:44:29.643574 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f"} Nov 1 00:44:29.643607 kubelet[2501]: E1101 00:44:29.643597 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6daa08f9-a020-4d78-b34b-6c75d3bd1afe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:44:29.643671 kubelet[2501]: E1101 00:44:29.643614 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6daa08f9-a020-4d78-b34b-6c75d3bd1afe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6897698c76-c6xgv" podUID="6daa08f9-a020-4d78-b34b-6c75d3bd1afe" Nov 1 00:44:29.643722 env[1565]: time="2025-11-01T00:44:29.643699406Z" level=error msg="StopPodSandbox for \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\" failed" error="failed to destroy network for sandbox \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:29.643793 kubelet[2501]: E1101 00:44:29.643777 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:44:29.643818 kubelet[2501]: E1101 00:44:29.643799 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516"} Nov 1 00:44:29.643839 kubelet[2501]: E1101 00:44:29.643822 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f6bb8ca-d7c7-4d64-919c-e85097fdc068\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:44:29.643875 kubelet[2501]: E1101 00:44:29.643839 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f6bb8ca-d7c7-4d64-919c-e85097fdc068\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:44:29.645553 env[1565]: time="2025-11-01T00:44:29.645529530Z" level=error msg="StopPodSandbox for \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\" failed" error="failed to destroy network for sandbox \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:29.645610 kubelet[2501]: E1101 00:44:29.645598 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:44:29.645643 kubelet[2501]: E1101 00:44:29.645613 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0"} Nov 1 00:44:29.645643 kubelet[2501]: E1101 00:44:29.645627 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"135646f8-0c66-45b5-80ce-9bb45c825de7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:44:29.645643 kubelet[2501]: E1101 00:44:29.645637 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"135646f8-0c66-45b5-80ce-9bb45c825de7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:44:29.645798 env[1565]: time="2025-11-01T00:44:29.645781949Z" level=error msg="StopPodSandbox for \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\" failed" error="failed to destroy network for sandbox \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:44:29.645886 kubelet[2501]: E1101 00:44:29.645872 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:44:29.645915 kubelet[2501]: E1101 00:44:29.645890 2501 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c"} Nov 1 00:44:29.645915 kubelet[2501]: E1101 00:44:29.645904 2501 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9cd09edd-44db-4b31-b369-b622badfedc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:44:29.645969 kubelet[2501]: E1101 00:44:29.645914 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9cd09edd-44db-4b31-b369-b622badfedc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:44:34.047121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131395200.mount: Deactivated successfully. 
Nov 1 00:44:34.082393 env[1565]: time="2025-11-01T00:44:34.082310114Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:34.083739 env[1565]: time="2025-11-01T00:44:34.083697119Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:34.086459 env[1565]: time="2025-11-01T00:44:34.086371744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:34.089092 env[1565]: time="2025-11-01T00:44:34.089008549Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:34.090418 env[1565]: time="2025-11-01T00:44:34.090322717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:44:34.103066 env[1565]: time="2025-11-01T00:44:34.103048038Z" level=info msg="CreateContainer within sandbox \"453373e240104f07be56e55b6456f0f118330092894184896c8c77a1beb92421\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:44:34.108380 env[1565]: time="2025-11-01T00:44:34.108355462Z" level=info msg="CreateContainer within sandbox \"453373e240104f07be56e55b6456f0f118330092894184896c8c77a1beb92421\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8a57e2bd0da0bc7f0bc06c27051374809b8dbdef1bf675f5f9c803d7fa524d06\"" Nov 1 00:44:34.108629 env[1565]: time="2025-11-01T00:44:34.108617143Z" level=info msg="StartContainer for \"8a57e2bd0da0bc7f0bc06c27051374809b8dbdef1bf675f5f9c803d7fa524d06\"" Nov 1 00:44:34.116975 systemd[1]: Started cri-containerd-8a57e2bd0da0bc7f0bc06c27051374809b8dbdef1bf675f5f9c803d7fa524d06.scope. 
Nov 1 00:44:34.125000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.153296 kernel: kauditd_printk_skb: 34 callbacks suppressed Nov 1 00:44:34.153392 kernel: audit: type=1400 audit(1761957874.125:990): avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.125000 audit[3943]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001bd6b0 a2=3c a3=7fde61237108 items=0 ppid=3118 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:34.315941 kernel: audit: type=1300 audit(1761957874.125:990): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001bd6b0 a2=3c a3=7fde61237108 items=0 ppid=3118 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:34.315978 kernel: audit: type=1327 audit(1761957874.125:990): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861353765326264306461306263376630626330366332373035313337 Nov 1 00:44:34.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861353765326264306461306263376630626330366332373035313337 Nov 1 00:44:34.409421 kernel: audit: type=1400 audit(1761957874.125:991): avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.125000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.473060 kernel: audit: type=1400 audit(1761957874.125:991): avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.125000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.536617 kernel: audit: type=1400 audit(1761957874.125:991): avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.125000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.600173 kernel: audit: type=1400 audit(1761957874.125:991): avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.125000 audit[3943]: 
AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.605812 env[1565]: time="2025-11-01T00:44:34.605782187Z" level=info msg="StartContainer for \"8a57e2bd0da0bc7f0bc06c27051374809b8dbdef1bf675f5f9c803d7fa524d06\" returns successfully" Nov 1 00:44:34.125000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.728121 kernel: audit: type=1400 audit(1761957874.125:991): avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.728186 kernel: audit: type=1400 audit(1761957874.125:991): avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.125000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.735778 kubelet[2501]: I1101 00:44:34.735740 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k97x5" podStartSLOduration=1.462872604 podStartE2EDuration="16.735728555s" podCreationTimestamp="2025-11-01 00:44:18 +0000 UTC" firstStartedPulling="2025-11-01 00:44:18.819288281 +0000 UTC m=+18.344218317" lastFinishedPulling="2025-11-01 00:44:34.092144206 +0000 UTC m=+33.617074268" observedRunningTime="2025-11-01 00:44:34.735303984 +0000 UTC m=+34.260234003" watchObservedRunningTime="2025-11-01 00:44:34.735728555 +0000 UTC m=+34.260658572" Nov 1 00:44:34.125000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.856155 kernel: audit: type=1400 audit(1761957874.125:991): avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.125000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.125000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.125000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.125000 audit: BPF prog-id=133 op=LOAD Nov 1 00:44:34.125000 audit[3943]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001bd9d8 a2=78 a3=c0002d5c88 items=0 ppid=3118 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:34.125000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861353765326264306461306263376630626330366332373035313337 Nov 1 00:44:34.217000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.217000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.217000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.217000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.217000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.217000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.217000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.217000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.217000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.217000 audit: BPF prog-id=134 op=LOAD Nov 1 00:44:34.217000 audit[3943]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001bd770 a2=78 a3=c0002d5cd8 items=0 ppid=3118 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:34.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861353765326264306461306263376630626330366332373035313337 Nov 1 00:44:34.408000 audit: BPF prog-id=134 op=UNLOAD Nov 1 00:44:34.408000 audit: BPF prog-id=133 op=UNLOAD Nov 1 00:44:34.408000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.408000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.408000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.408000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.408000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.408000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.408000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.408000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.408000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.408000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:34.408000 audit: BPF prog-id=135 op=LOAD Nov 1 00:44:34.408000 audit[3943]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001bdc30 a2=78 a3=c0002d5d68 items=0 ppid=3118 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:34.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861353765326264306461306263376630626330366332373035313337 Nov 1 00:44:34.939329 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:44:34.939381 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:44:35.062511 env[1565]: time="2025-11-01T00:44:35.062439542Z" level=info msg="StopPodSandbox for \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\"" Nov 1 00:44:35.113075 env[1565]: 2025-11-01 00:44:35.095 [INFO][4044] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 00:44:35.113075 env[1565]: 2025-11-01 00:44:35.095 [INFO][4044] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" iface="eth0" netns="/var/run/netns/cni-4c8eb302-df3f-7179-b18c-2d4ed3e5ef9c" Nov 1 00:44:35.113075 env[1565]: 2025-11-01 00:44:35.095 [INFO][4044] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" iface="eth0" netns="/var/run/netns/cni-4c8eb302-df3f-7179-b18c-2d4ed3e5ef9c" Nov 1 00:44:35.113075 env[1565]: 2025-11-01 00:44:35.096 [INFO][4044] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" iface="eth0" netns="/var/run/netns/cni-4c8eb302-df3f-7179-b18c-2d4ed3e5ef9c" Nov 1 00:44:35.113075 env[1565]: 2025-11-01 00:44:35.096 [INFO][4044] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 00:44:35.113075 env[1565]: 2025-11-01 00:44:35.096 [INFO][4044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 00:44:35.113075 env[1565]: 2025-11-01 00:44:35.105 [INFO][4072] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" HandleID="k8s-pod-network.33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Workload="ci--3510.3.8--n--3bc793b712-k8s-whisker--6897698c76--c6xgv-eth0" Nov 1 00:44:35.113075 env[1565]: 2025-11-01 00:44:35.106 [INFO][4072] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:35.113075 env[1565]: 2025-11-01 00:44:35.106 [INFO][4072] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:35.113075 env[1565]: 2025-11-01 00:44:35.109 [WARNING][4072] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" HandleID="k8s-pod-network.33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Workload="ci--3510.3.8--n--3bc793b712-k8s-whisker--6897698c76--c6xgv-eth0" Nov 1 00:44:35.113075 env[1565]: 2025-11-01 00:44:35.109 [INFO][4072] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" HandleID="k8s-pod-network.33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Workload="ci--3510.3.8--n--3bc793b712-k8s-whisker--6897698c76--c6xgv-eth0" Nov 1 00:44:35.113075 env[1565]: 2025-11-01 00:44:35.110 [INFO][4072] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:35.113075 env[1565]: 2025-11-01 00:44:35.112 [INFO][4044] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 00:44:35.113508 env[1565]: time="2025-11-01T00:44:35.113098488Z" level=info msg="TearDown network for sandbox \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\" successfully" Nov 1 00:44:35.113508 env[1565]: time="2025-11-01T00:44:35.113117282Z" level=info msg="StopPodSandbox for \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\" returns successfully" Nov 1 00:44:35.114600 systemd[1]: run-netns-cni\x2d4c8eb302\x2ddf3f\x2d7179\x2db18c\x2d2d4ed3e5ef9c.mount: Deactivated successfully. 
Nov 1 00:44:35.189489 kubelet[2501]: I1101 00:44:35.189435 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6daa08f9-a020-4d78-b34b-6c75d3bd1afe-whisker-backend-key-pair\") pod \"6daa08f9-a020-4d78-b34b-6c75d3bd1afe\" (UID: \"6daa08f9-a020-4d78-b34b-6c75d3bd1afe\") " Nov 1 00:44:35.189489 kubelet[2501]: I1101 00:44:35.189462 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bvfj\" (UniqueName: \"kubernetes.io/projected/6daa08f9-a020-4d78-b34b-6c75d3bd1afe-kube-api-access-6bvfj\") pod \"6daa08f9-a020-4d78-b34b-6c75d3bd1afe\" (UID: \"6daa08f9-a020-4d78-b34b-6c75d3bd1afe\") " Nov 1 00:44:35.189643 kubelet[2501]: I1101 00:44:35.189511 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6daa08f9-a020-4d78-b34b-6c75d3bd1afe-whisker-ca-bundle\") pod \"6daa08f9-a020-4d78-b34b-6c75d3bd1afe\" (UID: \"6daa08f9-a020-4d78-b34b-6c75d3bd1afe\") " Nov 1 00:44:35.189840 kubelet[2501]: I1101 00:44:35.189791 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6daa08f9-a020-4d78-b34b-6c75d3bd1afe-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6daa08f9-a020-4d78-b34b-6c75d3bd1afe" (UID: "6daa08f9-a020-4d78-b34b-6c75d3bd1afe"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:44:35.191466 kubelet[2501]: I1101 00:44:35.191421 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6daa08f9-a020-4d78-b34b-6c75d3bd1afe-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6daa08f9-a020-4d78-b34b-6c75d3bd1afe" (UID: "6daa08f9-a020-4d78-b34b-6c75d3bd1afe"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:44:35.191580 kubelet[2501]: I1101 00:44:35.191537 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6daa08f9-a020-4d78-b34b-6c75d3bd1afe-kube-api-access-6bvfj" (OuterVolumeSpecName: "kube-api-access-6bvfj") pod "6daa08f9-a020-4d78-b34b-6c75d3bd1afe" (UID: "6daa08f9-a020-4d78-b34b-6c75d3bd1afe"). InnerVolumeSpecName "kube-api-access-6bvfj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:35.193131 systemd[1]: var-lib-kubelet-pods-6daa08f9\x2da020\x2d4d78\x2db34b\x2d6c75d3bd1afe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6bvfj.mount: Deactivated successfully. Nov 1 00:44:35.193199 systemd[1]: var-lib-kubelet-pods-6daa08f9\x2da020\x2d4d78\x2db34b\x2d6c75d3bd1afe-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 00:44:35.290532 kubelet[2501]: I1101 00:44:35.290409 2501 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6daa08f9-a020-4d78-b34b-6c75d3bd1afe-whisker-ca-bundle\") on node \"ci-3510.3.8-n-3bc793b712\" DevicePath \"\"" Nov 1 00:44:35.290532 kubelet[2501]: I1101 00:44:35.290483 2501 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6daa08f9-a020-4d78-b34b-6c75d3bd1afe-whisker-backend-key-pair\") on node \"ci-3510.3.8-n-3bc793b712\" DevicePath \"\"" Nov 1 00:44:35.290532 kubelet[2501]: I1101 00:44:35.290533 2501 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6bvfj\" (UniqueName: \"kubernetes.io/projected/6daa08f9-a020-4d78-b34b-6c75d3bd1afe-kube-api-access-6bvfj\") on node \"ci-3510.3.8-n-3bc793b712\" DevicePath \"\"" Nov 1 00:44:35.644950 systemd[1]: Removed slice kubepods-besteffort-pod6daa08f9_a020_4d78_b34b_6c75d3bd1afe.slice. Nov 1 00:44:35.674940 systemd[1]: Created slice kubepods-besteffort-pod30a69d9b_8fc2_4064_9c4c_2a9a4d33a87d.slice. Nov 1 00:44:35.795184 kubelet[2501]: I1101 00:44:35.795091 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d-whisker-backend-key-pair\") pod \"whisker-84c874fc74-qqjt8\" (UID: \"30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d\") " pod="calico-system/whisker-84c874fc74-qqjt8" Nov 1 00:44:35.796048 kubelet[2501]: I1101 00:44:35.795291 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d-whisker-ca-bundle\") pod \"whisker-84c874fc74-qqjt8\" (UID: \"30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d\") " pod="calico-system/whisker-84c874fc74-qqjt8" Nov 1 00:44:35.796048 kubelet[2501]: I1101 00:44:35.795371 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slg5d\" (UniqueName: \"kubernetes.io/projected/30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d-kube-api-access-slg5d\") pod \"whisker-84c874fc74-qqjt8\" (UID: \"30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d\") " pod="calico-system/whisker-84c874fc74-qqjt8" Nov 1 00:44:35.978371 env[1565]: time="2025-11-01T00:44:35.978211545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c874fc74-qqjt8,Uid:30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d,Namespace:calico-system,Attempt:0,}" Nov 1 00:44:36.106693 systemd-networkd[1324]: cali2a6aaa6410c: Link UP Nov 1 00:44:36.163335 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:44:36.163376 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2a6aaa6410c: link becomes ready Nov 1 00:44:36.163377 systemd-networkd[1324]: cali2a6aaa6410c: Gained carrier Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.030 [INFO][4133] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.042 [INFO][4133] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-eth0 whisker-84c874fc74- calico-system 30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d 907 0 2025-11-01 00:44:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84c874fc74 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-n-3bc793b712 whisker-84c874fc74-qqjt8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2a6aaa6410c [] [] }} ContainerID="28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" Namespace="calico-system" Pod="whisker-84c874fc74-qqjt8" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-" Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.042 [INFO][4133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" Namespace="calico-system" Pod="whisker-84c874fc74-qqjt8" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-eth0" Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.063 [INFO][4155] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" HandleID="k8s-pod-network.28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" Workload="ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-eth0" Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.063 [INFO][4155] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" HandleID="k8s-pod-network.28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" Workload="ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003be030), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-3bc793b712", "pod":"whisker-84c874fc74-qqjt8", "timestamp":"2025-11-01 00:44:36.063067103 +0000 UTC"}, Hostname:"ci-3510.3.8-n-3bc793b712", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.063 [INFO][4155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.063 [INFO][4155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.063 [INFO][4155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-3bc793b712' Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.069 [INFO][4155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.073 [INFO][4155] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.076 [INFO][4155] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.078 [INFO][4155] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.080 [INFO][4155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.080 [INFO][4155] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.082 [INFO][4155] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31 Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.085 [INFO][4155] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.090 [INFO][4155] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.65/26] block=192.168.3.64/26 handle="k8s-pod-network.28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.090 [INFO][4155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.65/26] handle="k8s-pod-network.28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.090 [INFO][4155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:44:36.169804 env[1565]: 2025-11-01 00:44:36.090 [INFO][4155] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.65/26] IPv6=[] ContainerID="28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" HandleID="k8s-pod-network.28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" Workload="ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-eth0" Nov 1 00:44:36.170635 env[1565]: 2025-11-01 00:44:36.092 [INFO][4133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" Namespace="calico-system" Pod="whisker-84c874fc74-qqjt8" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-eth0", GenerateName:"whisker-84c874fc74-", Namespace:"calico-system", SelfLink:"", UID:"30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84c874fc74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"", Pod:"whisker-84c874fc74-qqjt8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.3.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2a6aaa6410c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:36.170635 env[1565]: 2025-11-01 00:44:36.092 [INFO][4133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.65/32] ContainerID="28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" Namespace="calico-system" Pod="whisker-84c874fc74-qqjt8" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-eth0" Nov 1 00:44:36.170635 env[1565]: 2025-11-01 00:44:36.092 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a6aaa6410c ContainerID="28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" Namespace="calico-system" Pod="whisker-84c874fc74-qqjt8" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-eth0" Nov 1 00:44:36.170635 env[1565]: 2025-11-01 00:44:36.163 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" Namespace="calico-system" Pod="whisker-84c874fc74-qqjt8" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-eth0" Nov 1 00:44:36.170635 env[1565]: 2025-11-01 00:44:36.163 [INFO][4133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" Namespace="calico-system" Pod="whisker-84c874fc74-qqjt8" 
WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-eth0", GenerateName:"whisker-84c874fc74-", Namespace:"calico-system", SelfLink:"", UID:"30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84c874fc74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31", Pod:"whisker-84c874fc74-qqjt8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.3.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2a6aaa6410c", MAC:"8a:46:cb:85:69:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:36.170635 env[1565]: 2025-11-01 00:44:36.168 [INFO][4133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31" Namespace="calico-system" Pod="whisker-84c874fc74-qqjt8" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-whisker--84c874fc74--qqjt8-eth0" Nov 1 00:44:36.174325 env[1565]: time="2025-11-01T00:44:36.174294880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:36.174325 env[1565]: time="2025-11-01T00:44:36.174316266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:36.174325 env[1565]: time="2025-11-01T00:44:36.174323209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:36.174425 env[1565]: time="2025-11-01T00:44:36.174384815Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31 pid=4187 runtime=io.containerd.runc.v2 Nov 1 00:44:36.181709 systemd[1]: Started cri-containerd-28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31.scope. 
Nov 1 00:44:36.185000 audit[4255]: AVC avc: denied { write } for pid=4255 comm="tee" name="fd" dev="proc" ino=43042 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:36.185000 audit[4255]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeeb47d7d3 a2=241 a3=1b6 items=1 ppid=4216 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.185000 audit: CWD cwd="/etc/service/enabled/confd/log" Nov 1 00:44:36.185000 audit: PATH item=0 name="/dev/fd/63" inode=41096 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:36.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:36.186000 audit[4254]: AVC avc: denied { write } for pid=4254 comm="tee" name="fd" dev="proc" ino=30458 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:36.186000 audit[4254]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff75dc67d5 a2=241 a3=1b6 items=1 ppid=4214 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.186000 audit: CWD cwd="/etc/service/enabled/cni/log" Nov 1 00:44:36.186000 audit: PATH item=0 name="/dev/fd/63" inode=40077 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:36.186000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:36.186000 audit[4262]: AVC avc: denied { write } for pid=4262 comm="tee" name="fd" dev="proc" ino=40080 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:36.186000 audit[4259]: AVC avc: denied { write } for pid=4259 comm="tee" name="fd" dev="proc" ino=43046 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:36.186000 audit[4259]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffef5ef87d3 a2=241 a3=1b6 items=1 ppid=4215 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.186000 audit[4262]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffb6b317d4 a2=241 a3=1b6 items=1 ppid=4219 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.186000 audit: CWD cwd="/etc/service/enabled/bird/log" Nov 1 00:44:36.186000 audit: CWD cwd="/etc/service/enabled/felix/log" Nov 1 00:44:36.186000 audit: PATH item=0 name="/dev/fd/63" inode=39155 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 
00:44:36.186000 audit: PATH item=0 name="/dev/fd/63" inode=39154 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:36.186000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:36.186000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:36.187000 audit[4268]: AVC avc: denied { write } for pid=4268 comm="tee" name="fd" dev="proc" ino=39158 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:36.187000 audit[4268]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeb6b227d3 a2=241 a3=1b6 items=1 ppid=4217 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.187000 audit: CWD cwd="/etc/service/enabled/bird6/log" Nov 1 00:44:36.187000 audit: PATH item=0 name="/dev/fd/63" inode=34210 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:36.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:36.187000 audit[4265]: AVC avc: denied { write } for pid=4265 comm="tee" name="fd" dev="proc" ino=29623 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:36.187000 audit[4265]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed0e217c3 a2=241 a3=1b6 items=1 ppid=4224 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.187000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 00:44:36.187000 audit: PATH item=0 name="/dev/fd/63" inode=32232 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:36.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:36.187000 audit[4292]: AVC avc: denied { write } for pid=4292 comm="tee" name="fd" dev="proc" ino=32235 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:36.187000 audit[4292]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcf7ec57c4 a2=241 a3=1b6 items=1 ppid=4218 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.187000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Nov 1 00:44:36.187000 audit: PATH item=0 name="/dev/fd/63" inode=37245 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 
00:44:36.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:36.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.190000 audit: BPF prog-id=136 op=LOAD Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=4187 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313038666638356434303131353537343764323434386534626265 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=4187 pid=4197 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313038666638356434303131353537343764323434386534626265 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit: BPF prog-id=137 op=LOAD Nov 1 00:44:36.191000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c00021ec30 items=0 ppid=4187 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313038666638356434303131353537343764323434386534626265 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for 
pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit: BPF prog-id=138 op=LOAD Nov 1 00:44:36.191000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c00021ec78 items=0 ppid=4187 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313038666638356434303131353537343764323434386534626265 Nov 1 00:44:36.191000 audit: BPF prog-id=138 op=UNLOAD Nov 1 00:44:36.191000 audit: BPF prog-id=137 op=UNLOAD Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } 
for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { perfmon } for pid=4197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit[4197]: AVC avc: denied { bpf } for pid=4197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.191000 audit: BPF prog-id=139 op=LOAD Nov 1 00:44:36.191000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c00021f088 items=0 ppid=4187 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313038666638356434303131353537343764323434386534626265 Nov 1 00:44:36.215990 env[1565]: time="2025-11-01T00:44:36.215961889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c874fc74-qqjt8,Uid:30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d,Namespace:calico-system,Attempt:0,} returns sandbox id \"28108ff85d401155747d2448e4bbe1932ca8a1ada12bacffc80b23e33e279b31\"" Nov 1 00:44:36.216954 env[1565]: time="2025-11-01T00:44:36.216912157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { bpf } for pid=4372 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { bpf } for pid=4372 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { bpf } for pid=4372 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { bpf } for pid=4372 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit: BPF prog-id=140 op=LOAD Nov 1 00:44:36.249000 audit[4372]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe336dcf80 a2=98 a3=1fffffffffffffff items=0 ppid=4234 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.249000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:44:36.249000 audit: BPF prog-id=140 op=UNLOAD Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { bpf } for pid=4372 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { bpf } for pid=4372 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { bpf } for pid=4372 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { bpf } for pid=4372 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit: BPF prog-id=141 op=LOAD Nov 1 00:44:36.249000 audit[4372]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe336dce60 a2=94 a3=3 items=0 ppid=4234 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.249000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:44:36.249000 audit: BPF prog-id=141 op=UNLOAD Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { bpf } for pid=4372 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { bpf } for pid=4372 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { bpf } for pid=4372 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { bpf } for pid=4372 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit: BPF prog-id=142 op=LOAD Nov 1 00:44:36.249000 audit[4372]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe336dcea0 a2=94 a3=7ffe336dd080 items=0 ppid=4234 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.249000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:44:36.249000 audit: BPF prog-id=142 op=UNLOAD Nov 1 00:44:36.249000 audit[4372]: AVC avc: denied { perfmon } for pid=4372 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4372]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffe336dcf70 a2=50 a3=a000000085 items=0 ppid=4234 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.249000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:44:36.249000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.249000 audit: BPF prog-id=143 op=LOAD Nov 1 00:44:36.249000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe35c38560 a2=98 a3=3 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.249000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.249000 audit: BPF prog-id=143 op=UNLOAD Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit: BPF prog-id=144 op=LOAD Nov 1 00:44:36.250000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe35c38350 a2=94 a3=54428f items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.250000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.250000 audit: BPF prog-id=144 op=UNLOAD Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.250000 audit: BPF prog-id=145 op=LOAD Nov 1 00:44:36.250000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe35c38380 a2=94 a3=2 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.250000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.250000 audit: BPF prog-id=145 op=UNLOAD Nov 1 00:44:36.337000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.337000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.337000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.337000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.337000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.337000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.337000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.337000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.337000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.337000 audit: BPF prog-id=146 op=LOAD Nov 1 00:44:36.337000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe35c38240 a2=94 a3=1 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.337000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.337000 audit: BPF prog-id=146 op=UNLOAD Nov 1 00:44:36.337000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.337000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe35c38310 a2=50 a3=7ffe35c383f0 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.337000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe35c38250 a2=28 a3=0 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe35c38280 a2=28 a3=0 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe35c38190 a2=28 a3=0 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe35c382a0 a2=28 a3=0 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe35c38280 a2=28 a3=0 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe35c38270 a2=28 a3=0 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe35c382a0 a2=28 a3=0 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe35c38280 a2=28 a3=0 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe35c382a0 a2=28 a3=0 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe35c38270 a2=28 a3=0 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe35c382e0 a2=28 a3=0 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe35c38090 a2=50 a3=1 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit: BPF prog-id=147 op=LOAD Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe35c38090 a2=94 a3=5 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit: BPF prog-id=147 op=UNLOAD Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 
a1=7ffe35c38140 a2=50 a3=1 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe35c38260 a2=4 a3=38 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.344000 audit[4373]: AVC avc: denied { confidentiality } for pid=4373 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:44:36.344000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe35c382b0 a2=94 a3=6 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { confidentiality } for pid=4373 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:44:36.345000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe35c37a60 a2=94 a3=88 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.345000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { perfmon } for pid=4373 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { bpf } for pid=4373 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.345000 audit[4373]: AVC avc: denied { confidentiality } for pid=4373 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:44:36.345000 audit[4373]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe35c37a60 a2=94 a3=88 items=0 ppid=4234 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.345000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { bpf } for pid=4376 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { bpf } for pid=4376 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { bpf } for pid=4376 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { bpf } for pid=4376 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit: BPF prog-id=148 op=LOAD Nov 1 00:44:36.349000 audit[4376]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdce9d4590 a2=98 a3=1999999999999999 items=0 ppid=4234 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.349000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:44:36.349000 audit: BPF prog-id=148 op=UNLOAD Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { bpf } for pid=4376 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { bpf } for pid=4376 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { bpf } for pid=4376 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { bpf } for pid=4376 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit: BPF prog-id=149 op=LOAD Nov 1 00:44:36.349000 audit[4376]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdce9d4470 a2=94 
a3=ffff items=0 ppid=4234 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.349000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:44:36.349000 audit: BPF prog-id=149 op=UNLOAD Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { bpf } for pid=4376 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { bpf } for pid=4376 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { perfmon } for pid=4376 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { bpf } for pid=4376 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit[4376]: AVC avc: denied { bpf } for pid=4376 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.349000 audit: BPF prog-id=150 op=LOAD Nov 1 00:44:36.349000 audit[4376]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdce9d44b0 a2=94 a3=7ffdce9d4690 items=0 ppid=4234 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.349000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:44:36.349000 audit: BPF prog-id=150 op=UNLOAD Nov 1 00:44:36.379058 systemd-networkd[1324]: vxlan.calico: Link UP Nov 1 00:44:36.379061 systemd-networkd[1324]: vxlan.calico: Gained carrier Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit: BPF prog-id=151 op=LOAD Nov 1 00:44:36.385000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda7da9ea0 a2=98 a3=0 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.385000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.385000 audit: BPF prog-id=151 op=UNLOAD Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:44:36.385000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.385000 audit: BPF prog-id=152 op=LOAD Nov 1 00:44:36.385000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda7da9cb0 a2=94 a3=54428f items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.385000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit: BPF prog-id=152 op=UNLOAD Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit: BPF prog-id=153 op=LOAD Nov 1 00:44:36.386000 audit[4401]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda7da9ce0 a2=94 a3=2 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit: BPF prog-id=153 op=UNLOAD Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffda7da9bb0 a2=28 a3=0 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda7da9be0 a2=28 a3=0 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda7da9af0 a2=28 a3=0 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffda7da9c00 a2=28 a3=0 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffda7da9be0 a2=28 a3=0 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffda7da9bd0 a2=28 a3=0 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffda7da9c00 a2=28 a3=0 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda7da9be0 a2=28 a3=0 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda7da9c00 a2=28 a3=0 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffda7da9bd0 a2=28 a3=0 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffda7da9c40 a2=28 a3=0 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit: BPF prog-id=154 op=LOAD Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffda7da9ab0 a2=94 a3=0 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit: BPF prog-id=154 op=UNLOAD Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffda7da9aa0 a2=50 a3=2800 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffda7da9aa0 a2=50 a3=2800 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit: BPF prog-id=155 op=LOAD Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffda7da92c0 a2=94 a3=2 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.386000 audit: BPF prog-id=155 op=UNLOAD Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { perfmon } for pid=4401 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit[4401]: AVC avc: denied { bpf } for pid=4401 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.386000 audit: BPF prog-id=156 op=LOAD Nov 1 00:44:36.386000 audit[4401]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffda7da93c0 a2=94 a3=30 items=0 ppid=4234 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.386000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit: BPF prog-id=157 op=LOAD Nov 1 00:44:36.387000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 
a1=7ffe79dc9c20 a2=98 a3=0 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.387000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.387000 audit: BPF prog-id=157 op=UNLOAD Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.387000 audit: BPF prog-id=158 op=LOAD Nov 1 00:44:36.387000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe79dc9a10 a2=94 a3=54428f items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.387000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.388000 audit: BPF prog-id=158 op=UNLOAD Nov 1 00:44:36.388000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.388000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.388000 
audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.388000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.388000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.388000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.388000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.388000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.388000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.388000 audit: BPF prog-id=159 op=LOAD Nov 1 00:44:36.388000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe79dc9a40 a2=94 a3=2 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.388000 audit: BPF prog-id=159 op=UNLOAD Nov 1 00:44:36.476000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.476000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.476000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.476000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.476000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.476000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.476000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.476000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.476000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.476000 audit: BPF prog-id=160 op=LOAD Nov 1 00:44:36.476000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe79dc9900 a2=94 a3=1 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.476000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.476000 audit: BPF prog-id=160 op=UNLOAD Nov 1 00:44:36.476000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.476000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe79dc99d0 a2=50 a3=7ffe79dc9ab0 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.476000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe79dc9910 a2=28 a3=0 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe79dc9940 a2=28 a3=0 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: 
AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe79dc9850 a2=28 a3=0 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe79dc9960 a2=28 a3=0 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe79dc9940 a2=28 a3=0 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe79dc9930 a2=28 a3=0 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe79dc9960 a2=28 a3=0 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe79dc9940 a2=28 a3=0 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe79dc9960 a2=28 a3=0 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe79dc9930 a2=28 a3=0 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe79dc99a0 a2=28 a3=0 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe79dc9750 a2=50 a3=1 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit: BPF prog-id=161 op=LOAD Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe79dc9750 a2=94 a3=5 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit: BPF prog-id=161 op=UNLOAD Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe79dc9800 a2=50 a3=1 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe79dc9920 a2=4 a3=38 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { confidentiality } for pid=4405 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:44:36.483000 
audit[4405]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe79dc9970 a2=94 a3=6 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.483000 audit[4405]: AVC avc: denied { confidentiality } for pid=4405 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:44:36.483000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe79dc9120 a2=94 a3=88 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { bpf } for pid=4405 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { perfmon } for pid=4405 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { confidentiality } for pid=4405 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:44:36.484000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe79dc9120 a2=94 a3=88 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.484000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe79dcab50 a2=10 a3=f8f00800 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.484000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe79dca9f0 a2=10 a3=3 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.484000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe79dca990 a2=10 a3=3 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.484000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.484000 audit[4405]: AVC avc: denied { bpf } for pid=4405 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:36.484000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe79dca990 a2=10 a3=7 items=0 ppid=4234 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.484000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:36.493000 audit: BPF prog-id=156 op=UNLOAD Nov 1 00:44:36.527000 audit[4463]: NETFILTER_CFG table=mangle:103 family=2 entries=16 op=nft_register_chain pid=4463 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:36.527000 audit[4463]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc94d15a40 a2=0 a3=7ffc94d15a2c items=0 ppid=4234 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.527000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:36.530624 kubelet[2501]: I1101 00:44:36.530604 2501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6daa08f9-a020-4d78-b34b-6c75d3bd1afe" path="/var/lib/kubelet/pods/6daa08f9-a020-4d78-b34b-6c75d3bd1afe/volumes" Nov 1 00:44:36.542000 
audit[4462]: NETFILTER_CFG table=nat:104 family=2 entries=15 op=nft_register_chain pid=4462 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:36.542000 audit[4462]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff9d2ca280 a2=0 a3=7fff9d2ca26c items=0 ppid=4234 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.542000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:36.546000 audit[4461]: NETFILTER_CFG table=raw:105 family=2 entries=21 op=nft_register_chain pid=4461 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:36.546000 audit[4461]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffdf54d0900 a2=0 a3=7ffdf54d08ec items=0 ppid=4234 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.546000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:36.560000 audit[4466]: NETFILTER_CFG table=filter:106 family=2 entries=94 op=nft_register_chain pid=4466 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:36.560000 audit[4466]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffdb3ad4b00 a2=0 a3=55d706a85000 items=0 ppid=4234 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:36.560000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:36.578024 env[1565]: time="2025-11-01T00:44:36.577959788Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:36.578395 env[1565]: time="2025-11-01T00:44:36.578328056Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:44:36.578566 kubelet[2501]: E1101 00:44:36.578538 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:44:36.578645 kubelet[2501]: E1101 00:44:36.578580 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:44:36.578732 kubelet[2501]: E1101 
00:44:36.578699 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:03c5ae372c8f45a08ebfcb08205f19e3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-slg5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c874fc74-qqjt8_calico-system(30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:36.580436 env[1565]: time="2025-11-01T00:44:36.580409500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:44:36.934888 env[1565]: time="2025-11-01T00:44:36.934779404Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:36.936047 env[1565]: time="2025-11-01T00:44:36.935951631Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:44:36.936478 kubelet[2501]: E1101 00:44:36.936405 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:44:36.937183 kubelet[2501]: E1101 00:44:36.936524 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:44:36.937307 kubelet[2501]: E1101 00:44:36.936768 2501 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-slg5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c874fc74-qqjt8_calico-system(30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:36.938262 kubelet[2501]: E1101 00:44:36.938176 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:44:37.649155 kubelet[2501]: E1101 00:44:37.649024 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:44:37.667000 audit[4478]: NETFILTER_CFG table=filter:107 family=2 entries=20 op=nft_register_rule pid=4478 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:37.667000 audit[4478]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc2eaf47a0 a2=0 a3=7ffc2eaf478c items=0 ppid=2683 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:37.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:37.688000 audit[4478]: NETFILTER_CFG table=nat:108 family=2 entries=14 op=nft_register_rule pid=4478 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:37.688000 audit[4478]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc2eaf47a0 a2=0 a3=0 items=0 ppid=2683 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:37.688000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:37.822806 systemd-networkd[1324]: cali2a6aaa6410c: Gained IPv6LL Nov 1 00:44:38.335776 systemd-networkd[1324]: vxlan.calico: Gained IPv6LL Nov 1 00:44:40.529488 env[1565]: time="2025-11-01T00:44:40.529464635Z" level=info msg="StopPodSandbox for \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\"" Nov 1 00:44:40.585925 env[1565]: 2025-11-01 00:44:40.555 [INFO][4493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:44:40.585925 env[1565]: 2025-11-01 00:44:40.555 [INFO][4493] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" iface="eth0" netns="/var/run/netns/cni-b4e49d43-be0f-4424-ef55-d8a4f34a78ab" Nov 1 00:44:40.585925 env[1565]: 2025-11-01 00:44:40.555 [INFO][4493] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" iface="eth0" netns="/var/run/netns/cni-b4e49d43-be0f-4424-ef55-d8a4f34a78ab" Nov 1 00:44:40.585925 env[1565]: 2025-11-01 00:44:40.555 [INFO][4493] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" iface="eth0" netns="/var/run/netns/cni-b4e49d43-be0f-4424-ef55-d8a4f34a78ab" Nov 1 00:44:40.585925 env[1565]: 2025-11-01 00:44:40.555 [INFO][4493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:44:40.585925 env[1565]: 2025-11-01 00:44:40.555 [INFO][4493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:44:40.585925 env[1565]: 2025-11-01 00:44:40.572 [INFO][4512] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" HandleID="k8s-pod-network.3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:44:40.585925 env[1565]: 2025-11-01 00:44:40.572 [INFO][4512] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:40.585925 env[1565]: 2025-11-01 00:44:40.572 [INFO][4512] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:40.585925 env[1565]: 2025-11-01 00:44:40.580 [WARNING][4512] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" HandleID="k8s-pod-network.3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:44:40.585925 env[1565]: 2025-11-01 00:44:40.580 [INFO][4512] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" HandleID="k8s-pod-network.3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:44:40.585925 env[1565]: 2025-11-01 00:44:40.582 [INFO][4512] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:40.585925 env[1565]: 2025-11-01 00:44:40.584 [INFO][4493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:44:40.587056 env[1565]: time="2025-11-01T00:44:40.586065349Z" level=info msg="TearDown network for sandbox \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\" successfully" Nov 1 00:44:40.587056 env[1565]: time="2025-11-01T00:44:40.586101837Z" level=info msg="StopPodSandbox for \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\" returns successfully" Nov 1 00:44:40.587056 env[1565]: time="2025-11-01T00:44:40.586710091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-859bf66984-8h8hn,Uid:9cd09edd-44db-4b31-b369-b622badfedc3,Namespace:calico-system,Attempt:1,}" Nov 1 00:44:40.589976 systemd[1]: run-netns-cni\x2db4e49d43\x2dbe0f\x2d4424\x2def55\x2dd8a4f34a78ab.mount: Deactivated successfully. 
Nov 1 00:44:40.653362 systemd-networkd[1324]: calid098c8d73a2: Link UP Nov 1 00:44:40.708365 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:44:40.708416 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid098c8d73a2: link becomes ready Nov 1 00:44:40.708382 systemd-networkd[1324]: calid098c8d73a2: Gained carrier Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.610 [INFO][4528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0 calico-kube-controllers-859bf66984- calico-system 9cd09edd-44db-4b31-b369-b622badfedc3 936 0 2025-11-01 00:44:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:859bf66984 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-n-3bc793b712 calico-kube-controllers-859bf66984-8h8hn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid098c8d73a2 [] [] }} ContainerID="49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" Namespace="calico-system" Pod="calico-kube-controllers-859bf66984-8h8hn" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-" Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.610 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" Namespace="calico-system" Pod="calico-kube-controllers-859bf66984-8h8hn" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.626 [INFO][4550] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" HandleID="k8s-pod-network.49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.626 [INFO][4550] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" HandleID="k8s-pod-network.49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-3bc793b712", "pod":"calico-kube-controllers-859bf66984-8h8hn", "timestamp":"2025-11-01 00:44:40.626719468 +0000 UTC"}, Hostname:"ci-3510.3.8-n-3bc793b712", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.626 [INFO][4550] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.626 [INFO][4550] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.626 [INFO][4550] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-3bc793b712' Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.632 [INFO][4550] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.636 [INFO][4550] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.639 [INFO][4550] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.641 [INFO][4550] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.642 [INFO][4550] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.643 [INFO][4550] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.644 [INFO][4550] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37 Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.647 [INFO][4550] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.650 [INFO][4550] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.66/26] block=192.168.3.64/26 handle="k8s-pod-network.49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.650 [INFO][4550] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.66/26] handle="k8s-pod-network.49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.650 [INFO][4550] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:44:40.715009 env[1565]: 2025-11-01 00:44:40.650 [INFO][4550] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.66/26] IPv6=[] ContainerID="49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" HandleID="k8s-pod-network.49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:44:40.715533 env[1565]: 2025-11-01 00:44:40.652 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" Namespace="calico-system" Pod="calico-kube-controllers-859bf66984-8h8hn" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0", GenerateName:"calico-kube-controllers-859bf66984-", Namespace:"calico-system", SelfLink:"", UID:"9cd09edd-44db-4b31-b369-b622badfedc3", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"859bf66984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"", Pod:"calico-kube-controllers-859bf66984-8h8hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid098c8d73a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:40.715533 env[1565]: 2025-11-01 00:44:40.652 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.66/32] ContainerID="49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" Namespace="calico-system" Pod="calico-kube-controllers-859bf66984-8h8hn" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:44:40.715533 env[1565]: 2025-11-01 00:44:40.652 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid098c8d73a2 ContainerID="49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" Namespace="calico-system" Pod="calico-kube-controllers-859bf66984-8h8hn" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:44:40.715533 env[1565]: 2025-11-01 00:44:40.708 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" Namespace="calico-system" Pod="calico-kube-controllers-859bf66984-8h8hn" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:44:40.715533 env[1565]: 
2025-11-01 00:44:40.708 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" Namespace="calico-system" Pod="calico-kube-controllers-859bf66984-8h8hn" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0", GenerateName:"calico-kube-controllers-859bf66984-", Namespace:"calico-system", SelfLink:"", UID:"9cd09edd-44db-4b31-b369-b622badfedc3", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"859bf66984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37", Pod:"calico-kube-controllers-859bf66984-8h8hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid098c8d73a2", MAC:"72:75:8a:38:5e:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:40.715533 env[1565]: 2025-11-01 00:44:40.714 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37" Namespace="calico-system" Pod="calico-kube-controllers-859bf66984-8h8hn" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:44:40.719669 env[1565]: time="2025-11-01T00:44:40.719605004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:40.719669 env[1565]: time="2025-11-01T00:44:40.719626297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:40.719669 env[1565]: time="2025-11-01T00:44:40.719633214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:40.719784 env[1565]: time="2025-11-01T00:44:40.719692159Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37 pid=4588 runtime=io.containerd.runc.v2 Nov 1 00:44:40.720000 audit[4593]: NETFILTER_CFG table=filter:109 family=2 entries=36 op=nft_register_chain pid=4593 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:40.729538 systemd[1]: Started cri-containerd-49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37.scope. Nov 1 00:44:40.746272 kernel: kauditd_printk_skb: 653 callbacks suppressed Nov 1 00:44:40.746314 kernel: audit: type=1325 audit(1761957880.720:1125): table=filter:109 family=2 entries=36 op=nft_register_chain pid=4593 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:40.720000 audit[4593]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7fff52202940 a2=0 a3=7fff5220292c items=0 ppid=4234 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:40.803553 kernel: audit: type=1300 audit(1761957880.720:1125): arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7fff52202940 a2=0 a3=7fff5220292c items=0 ppid=4234 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:40.720000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:40.897503 kernel: audit: type=1327 audit(1761957880.720:1125): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:40.810000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.018064 kernel: audit: type=1400 audit(1761957880.810:1126): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.018100 kernel: audit: type=1400 audit(1761957880.810:1127): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.810000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.810000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.140581 kernel: audit: type=1400 audit(1761957880.810:1128): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.140617 kernel: audit: type=1400 audit(1761957880.810:1129): avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.810000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.202170 kernel: audit: type=1400 audit(1761957880.810:1130): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.810000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.263790 kernel: audit: type=1400 audit(1761957880.810:1131): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.810000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.325727 kernel: audit: type=1400 audit(1761957880.810:1132): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.810000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.810000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.810000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit: BPF prog-id=162 op=LOAD Nov 1 00:44:40.956000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit[4599]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=4588 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:40.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439616136653266386264303361663232393461383031656131376461 Nov 1 00:44:40.956000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit[4599]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 
a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=4588 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:40.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439616136653266386264303361663232393461383031656131376461 Nov 1 00:44:40.956000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:40.956000 audit: BPF prog-id=163 op=LOAD Nov 1 00:44:40.956000 audit[4599]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00029b280 items=0 ppid=4588 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:40.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439616136653266386264303361663232393461383031656131376461 Nov 1 00:44:41.078000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:41.078000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.078000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.078000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.078000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.078000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.078000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.078000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.078000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.078000 audit: BPF prog-id=164 op=LOAD Nov 1 00:44:41.078000 audit[4599]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00029b2c8 items=0 ppid=4588 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439616136653266386264303361663232393461383031656131376461 Nov 1 00:44:41.201000 audit: BPF prog-id=164 op=UNLOAD Nov 1 00:44:41.201000 audit: BPF prog-id=163 op=UNLOAD Nov 1 00:44:41.201000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.201000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.201000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.201000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.201000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:41.201000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.201000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.201000 audit[4599]: AVC avc: denied { perfmon } for pid=4599 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.201000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.201000 audit[4599]: AVC avc: denied { bpf } for pid=4599 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.201000 audit: BPF prog-id=165 op=LOAD Nov 1 00:44:41.201000 audit[4599]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00029b6d8 items=0 ppid=4588 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439616136653266386264303361663232393461383031656131376461 Nov 1 00:44:41.403701 env[1565]: time="2025-11-01T00:44:41.403638671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-859bf66984-8h8hn,Uid:9cd09edd-44db-4b31-b369-b622badfedc3,Namespace:calico-system,Attempt:1,} returns sandbox id \"49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37\"" Nov 1 00:44:41.404540 env[1565]: time="2025-11-01T00:44:41.404520253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:44:41.530585 env[1565]: time="2025-11-01T00:44:41.530479421Z" level=info msg="StopPodSandbox for \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\"" Nov 1 00:44:41.530585 env[1565]: time="2025-11-01T00:44:41.530525977Z" level=info msg="StopPodSandbox for \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\"" Nov 1 00:44:41.582833 env[1565]: 2025-11-01 00:44:41.565 [INFO][4645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:44:41.582833 env[1565]: 2025-11-01 00:44:41.566 [INFO][4645] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" iface="eth0" netns="/var/run/netns/cni-b3a051a8-6e7a-9c2a-2e48-17b9813c1f10" Nov 1 00:44:41.582833 env[1565]: 2025-11-01 00:44:41.566 [INFO][4645] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" iface="eth0" netns="/var/run/netns/cni-b3a051a8-6e7a-9c2a-2e48-17b9813c1f10" Nov 1 00:44:41.582833 env[1565]: 2025-11-01 00:44:41.566 [INFO][4645] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" iface="eth0" netns="/var/run/netns/cni-b3a051a8-6e7a-9c2a-2e48-17b9813c1f10" Nov 1 00:44:41.582833 env[1565]: 2025-11-01 00:44:41.566 [INFO][4645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:44:41.582833 env[1565]: 2025-11-01 00:44:41.566 [INFO][4645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:44:41.582833 env[1565]: 2025-11-01 00:44:41.575 [INFO][4684] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" HandleID="k8s-pod-network.f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Workload="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:44:41.582833 env[1565]: 2025-11-01 00:44:41.575 [INFO][4684] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:41.582833 env[1565]: 2025-11-01 00:44:41.575 [INFO][4684] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:41.582833 env[1565]: 2025-11-01 00:44:41.580 [WARNING][4684] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" HandleID="k8s-pod-network.f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Workload="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:44:41.582833 env[1565]: 2025-11-01 00:44:41.580 [INFO][4684] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" HandleID="k8s-pod-network.f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Workload="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:44:41.582833 env[1565]: 2025-11-01 00:44:41.581 [INFO][4684] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:41.582833 env[1565]: 2025-11-01 00:44:41.582 [INFO][4645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:44:41.583199 env[1565]: time="2025-11-01T00:44:41.582882864Z" level=info msg="TearDown network for sandbox \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\" successfully" Nov 1 00:44:41.583199 env[1565]: time="2025-11-01T00:44:41.582905141Z" level=info msg="StopPodSandbox for \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\" returns successfully" Nov 1 00:44:41.583315 env[1565]: time="2025-11-01T00:44:41.583301133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bxfpm,Uid:135646f8-0c66-45b5-80ce-9bb45c825de7,Namespace:calico-system,Attempt:1,}" Nov 1 00:44:41.587407 env[1565]: 2025-11-01 00:44:41.565 [INFO][4646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:44:41.587407 env[1565]: 2025-11-01 00:44:41.565 [INFO][4646] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" iface="eth0" netns="/var/run/netns/cni-8ff13757-1e63-defe-8d13-bf76c4159698" Nov 1 00:44:41.587407 env[1565]: 2025-11-01 00:44:41.565 [INFO][4646] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" iface="eth0" netns="/var/run/netns/cni-8ff13757-1e63-defe-8d13-bf76c4159698" Nov 1 00:44:41.587407 env[1565]: 2025-11-01 00:44:41.565 [INFO][4646] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" iface="eth0" netns="/var/run/netns/cni-8ff13757-1e63-defe-8d13-bf76c4159698" Nov 1 00:44:41.587407 env[1565]: 2025-11-01 00:44:41.565 [INFO][4646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:44:41.587407 env[1565]: 2025-11-01 00:44:41.565 [INFO][4646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:44:41.587407 env[1565]: 2025-11-01 00:44:41.575 [INFO][4681] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" HandleID="k8s-pod-network.87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Workload="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:44:41.587407 env[1565]: 2025-11-01 00:44:41.575 [INFO][4681] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:41.587407 env[1565]: 2025-11-01 00:44:41.581 [INFO][4681] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:41.587407 env[1565]: 2025-11-01 00:44:41.584 [WARNING][4681] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" HandleID="k8s-pod-network.87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Workload="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:44:41.587407 env[1565]: 2025-11-01 00:44:41.584 [INFO][4681] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" HandleID="k8s-pod-network.87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Workload="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:44:41.587407 env[1565]: 2025-11-01 00:44:41.585 [INFO][4681] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:41.587407 env[1565]: 2025-11-01 00:44:41.586 [INFO][4646] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:44:41.587842 env[1565]: time="2025-11-01T00:44:41.587460958Z" level=info msg="TearDown network for sandbox \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\" successfully" Nov 1 00:44:41.587842 env[1565]: time="2025-11-01T00:44:41.587478657Z" level=info msg="StopPodSandbox for \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\" returns successfully" Nov 1 00:44:41.587842 env[1565]: time="2025-11-01T00:44:41.587777912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpkjg,Uid:ce15fc81-d33c-45b3-b08a-5d312fb076f0,Namespace:calico-system,Attempt:1,}" Nov 1 00:44:41.587709 systemd[1]: run-netns-cni\x2db3a051a8\x2d6e7a\x2d9c2a\x2d2e48\x2d17b9813c1f10.mount: Deactivated successfully. Nov 1 00:44:41.590406 systemd[1]: run-netns-cni\x2d8ff13757\x2d1e63\x2ddefe\x2d8d13\x2dbf76c4159698.mount: Deactivated successfully. Nov 1 00:44:41.641277 systemd-networkd[1324]: cali8050522b12a: Link UP Nov 1 00:44:41.669254 systemd-networkd[1324]: cali8050522b12a: Gained carrier Nov 1 00:44:41.669532 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8050522b12a: link becomes ready Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.604 [INFO][4715] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0 goldmane-666569f655- calico-system 135646f8-0c66-45b5-80ce-9bb45c825de7 947 0 2025-11-01 00:44:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-n-3bc793b712 goldmane-666569f655-bxfpm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8050522b12a [] [] }} ContainerID="f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" Namespace="calico-system" Pod="goldmane-666569f655-bxfpm" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-" Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.604 [INFO][4715] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" Namespace="calico-system" Pod="goldmane-666569f655-bxfpm" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.617 [INFO][4759] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" HandleID="k8s-pod-network.f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" Workload="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.617 [INFO][4759] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" HandleID="k8s-pod-network.f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" Workload="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f630), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-3bc793b712", "pod":"goldmane-666569f655-bxfpm", "timestamp":"2025-11-01 00:44:41.617112163 +0000 UTC"}, Hostname:"ci-3510.3.8-n-3bc793b712", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.617 [INFO][4759] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.617 [INFO][4759] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.617 [INFO][4759] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-3bc793b712' Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.622 [INFO][4759] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.626 [INFO][4759] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.629 [INFO][4759] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.630 [INFO][4759] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.632 [INFO][4759] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.632 [INFO][4759] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.633 [INFO][4759] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.635 [INFO][4759] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.639 [INFO][4759] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.67/26] block=192.168.3.64/26 handle="k8s-pod-network.f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.639 [INFO][4759] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.67/26] handle="k8s-pod-network.f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.639 [INFO][4759] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
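The IPAM messages above show Calico claiming 192.168.3.67/26 for goldmane-666569f655-bxfpm out of the host's affine block 192.168.3.64/26 on ci-3510.3.8-n-3bc793b712. A minimal sketch of the containment arithmetic those messages imply, assuming only Python's standard ipaddress module (illustrative only; not code that ran on the node):

import ipaddress

block = ipaddress.ip_network("192.168.3.64/26")   # affine block reported in the log for this host
claimed = ipaddress.ip_address("192.168.3.67")    # address the log assigns to goldmane-666569f655-bxfpm
assert claimed in block                           # a /26 spans 192.168.3.64 through 192.168.3.127
print(f"{claimed} is one of {block.num_addresses} addresses in {block}")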
Nov 1 00:44:41.676592 env[1565]: 2025-11-01 00:44:41.639 [INFO][4759] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.67/26] IPv6=[] ContainerID="f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" HandleID="k8s-pod-network.f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" Workload="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:44:41.677067 env[1565]: 2025-11-01 00:44:41.640 [INFO][4715] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" Namespace="calico-system" Pod="goldmane-666569f655-bxfpm" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"135646f8-0c66-45b5-80ce-9bb45c825de7", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"", Pod:"goldmane-666569f655-bxfpm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8050522b12a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:41.677067 env[1565]: 2025-11-01 00:44:41.640 [INFO][4715] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.67/32] ContainerID="f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" Namespace="calico-system" Pod="goldmane-666569f655-bxfpm" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:44:41.677067 env[1565]: 2025-11-01 00:44:41.640 [INFO][4715] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8050522b12a ContainerID="f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" Namespace="calico-system" Pod="goldmane-666569f655-bxfpm" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:44:41.677067 env[1565]: 2025-11-01 00:44:41.669 [INFO][4715] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" Namespace="calico-system" Pod="goldmane-666569f655-bxfpm" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:44:41.677067 env[1565]: 2025-11-01 00:44:41.669 [INFO][4715] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" Namespace="calico-system" Pod="goldmane-666569f655-bxfpm" 
WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"135646f8-0c66-45b5-80ce-9bb45c825de7", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea", Pod:"goldmane-666569f655-bxfpm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8050522b12a", MAC:"6a:97:af:10:76:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:41.677067 env[1565]: 2025-11-01 00:44:41.675 [INFO][4715] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea" Namespace="calico-system" Pod="goldmane-666569f655-bxfpm" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:44:41.681209 env[1565]: time="2025-11-01T00:44:41.681177563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:41.681209 env[1565]: time="2025-11-01T00:44:41.681198429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:41.681209 env[1565]: time="2025-11-01T00:44:41.681205445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:41.681314 env[1565]: time="2025-11-01T00:44:41.681271646Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea pid=4807 runtime=io.containerd.runc.v2 Nov 1 00:44:41.682000 audit[4817]: NETFILTER_CFG table=filter:110 family=2 entries=48 op=nft_register_chain pid=4817 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:41.682000 audit[4817]: SYSCALL arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7fff2300afd0 a2=0 a3=7fff2300afbc items=0 ppid=4234 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.682000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:41.686861 systemd[1]: Started cri-containerd-f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea.scope. Nov 1 00:44:41.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.692000 audit: BPF prog-id=166 op=LOAD Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:44:41.693000 audit[4818]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=4807 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635396365366530306138633938303862373934623031666431666662 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=4807 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635396365366530306138633938303862373934623031666431666662 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit: BPF prog-id=167 op=LOAD Nov 1 00:44:41.693000 audit[4818]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000308d30 items=0 ppid=4807 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635396365366530306138633938303862373934623031666431666662 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit: BPF prog-id=168 op=LOAD Nov 1 00:44:41.693000 audit[4818]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000308d78 items=0 ppid=4807 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635396365366530306138633938303862373934623031666431666662 Nov 1 00:44:41.693000 audit: BPF prog-id=168 op=UNLOAD Nov 1 00:44:41.693000 audit: BPF prog-id=167 op=UNLOAD Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } 
for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { perfmon } for pid=4818 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit[4818]: AVC avc: denied { bpf } for pid=4818 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.693000 audit: BPF prog-id=169 op=LOAD Nov 1 00:44:41.693000 audit[4818]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000309188 items=0 ppid=4807 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635396365366530306138633938303862373934623031666431666662 Nov 1 00:44:41.709450 env[1565]: time="2025-11-01T00:44:41.709426919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bxfpm,Uid:135646f8-0c66-45b5-80ce-9bb45c825de7,Namespace:calico-system,Attempt:1,} returns sandbox id \"f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea\"" Nov 1 00:44:41.749176 systemd-networkd[1324]: cali94fde579e17: Link UP Nov 1 00:44:41.782449 env[1565]: time="2025-11-01T00:44:41.782421598Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:41.782884 env[1565]: time="2025-11-01T00:44:41.782826658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:44:41.783057 kubelet[2501]: E1101 00:44:41.782989 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:44:41.783057 kubelet[2501]: E1101 00:44:41.783030 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:44:41.783306 kubelet[2501]: E1101 00:44:41.783184 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p47qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-859bf66984-8h8hn_calico-system(9cd09edd-44db-4b31-b369-b622badfedc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:41.783430 env[1565]: time="2025-11-01T00:44:41.783373464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:44:41.784451 kubelet[2501]: E1101 00:44:41.784406 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:44:41.802896 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:44:41.802934 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali94fde579e17: link becomes ready Nov 1 00:44:41.803103 systemd-networkd[1324]: cali94fde579e17: Gained carrier Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.608 [INFO][4731] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0 csi-node-driver- calico-system ce15fc81-d33c-45b3-b08a-5d312fb076f0 946 0 2025-11-01 00:44:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-n-3bc793b712 csi-node-driver-qpkjg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali94fde579e17 [] [] }} ContainerID="03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" Namespace="calico-system" Pod="csi-node-driver-qpkjg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-" Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.609 [INFO][4731] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" Namespace="calico-system" Pod="csi-node-driver-qpkjg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" 
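The audit PROCTITLE records scattered through this log (the runc invocations above and below) encode the process command line as hex with NUL-separated arguments. A short, illustrative decoder for inspecting such records offline, assuming only the Python standard library; the hex literal here is just the leading portion of one PROCTITLE value copied from this log:

proctitle_hex = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"  # prefix of a PROCTITLE record above
argv = [part.decode() for part in bytes.fromhex(proctitle_hex).split(b"\x00")]
print(argv)  # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']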
Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.620 [INFO][4765] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" HandleID="k8s-pod-network.03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" Workload="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.620 [INFO][4765] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" HandleID="k8s-pod-network.03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" Workload="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000137720), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-3bc793b712", "pod":"csi-node-driver-qpkjg", "timestamp":"2025-11-01 00:44:41.620591779 +0000 UTC"}, Hostname:"ci-3510.3.8-n-3bc793b712", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.620 [INFO][4765] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.639 [INFO][4765] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.639 [INFO][4765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-3bc793b712' Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.728 [INFO][4765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.731 [INFO][4765] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.734 [INFO][4765] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.736 [INFO][4765] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.737 [INFO][4765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.737 [INFO][4765] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.739 [INFO][4765] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.742 [INFO][4765] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.746 [INFO][4765] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.68/26] block=192.168.3.64/26 handle="k8s-pod-network.03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" 
host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.746 [INFO][4765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.68/26] handle="k8s-pod-network.03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.746 [INFO][4765] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:41.810092 env[1565]: 2025-11-01 00:44:41.746 [INFO][4765] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.68/26] IPv6=[] ContainerID="03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" HandleID="k8s-pod-network.03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" Workload="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:44:41.810543 env[1565]: 2025-11-01 00:44:41.747 [INFO][4731] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" Namespace="calico-system" Pod="csi-node-driver-qpkjg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce15fc81-d33c-45b3-b08a-5d312fb076f0", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"", Pod:"csi-node-driver-qpkjg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94fde579e17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:41.810543 env[1565]: 2025-11-01 00:44:41.748 [INFO][4731] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.68/32] ContainerID="03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" Namespace="calico-system" Pod="csi-node-driver-qpkjg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:44:41.810543 env[1565]: 2025-11-01 00:44:41.748 [INFO][4731] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94fde579e17 ContainerID="03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" Namespace="calico-system" Pod="csi-node-driver-qpkjg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:44:41.810543 env[1565]: 2025-11-01 00:44:41.803 [INFO][4731] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" Namespace="calico-system" Pod="csi-node-driver-qpkjg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:44:41.810543 env[1565]: 2025-11-01 00:44:41.803 [INFO][4731] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" Namespace="calico-system" Pod="csi-node-driver-qpkjg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce15fc81-d33c-45b3-b08a-5d312fb076f0", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f", Pod:"csi-node-driver-qpkjg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94fde579e17", MAC:"2e:e4:e2:a9:a9:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:41.810543 env[1565]: 2025-11-01 00:44:41.808 [INFO][4731] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f" Namespace="calico-system" Pod="csi-node-driver-qpkjg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:44:41.814340 env[1565]: time="2025-11-01T00:44:41.814277439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:41.814340 env[1565]: time="2025-11-01T00:44:41.814300163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:41.814340 env[1565]: time="2025-11-01T00:44:41.814310815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:41.814450 env[1565]: time="2025-11-01T00:44:41.814385665Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f pid=4857 runtime=io.containerd.runc.v2 Nov 1 00:44:41.815000 audit[4864]: NETFILTER_CFG table=filter:111 family=2 entries=44 op=nft_register_chain pid=4864 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:41.815000 audit[4864]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffde57ce2e0 a2=0 a3=7ffde57ce2cc items=0 ppid=4234 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.815000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:41.819555 systemd[1]: Started cri-containerd-03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f.scope. Nov 1 00:44:41.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit: BPF prog-id=170 op=LOAD Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:44:41.825000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=4857 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343231666533323963626363353730613665376262613265353466 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=4857 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343231666533323963626363353730613665376262613265353466 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit: BPF prog-id=171 op=LOAD Nov 1 00:44:41.825000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0002127d0 items=0 ppid=4857 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343231666533323963626363353730613665376262613265353466 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit: BPF prog-id=172 op=LOAD Nov 1 00:44:41.825000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000212818 items=0 ppid=4857 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343231666533323963626363353730613665376262613265353466 Nov 1 00:44:41.825000 audit: BPF prog-id=172 op=UNLOAD Nov 1 00:44:41.825000 audit: BPF prog-id=171 op=UNLOAD Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } 
for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { perfmon } for pid=4867 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit[4867]: AVC avc: denied { bpf } for pid=4867 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:41.825000 audit: BPF prog-id=173 op=LOAD Nov 1 00:44:41.825000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000212c28 items=0 ppid=4857 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:41.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343231666533323963626363353730613665376262613265353466 Nov 1 00:44:41.830824 env[1565]: time="2025-11-01T00:44:41.830800432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpkjg,Uid:ce15fc81-d33c-45b3-b08a-5d312fb076f0,Namespace:calico-system,Attempt:1,} returns sandbox id \"03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f\"" Nov 1 00:44:42.148771 env[1565]: time="2025-11-01T00:44:42.148678209Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:42.149154 env[1565]: time="2025-11-01T00:44:42.149128805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:44:42.149303 kubelet[2501]: E1101 00:44:42.149249 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:44:42.149303 kubelet[2501]: E1101 00:44:42.149280 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:44:42.149452 kubelet[2501]: E1101 00:44:42.149427 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfh6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bxfpm_calico-system(135646f8-0c66-45b5-80ce-9bb45c825de7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:42.149545 env[1565]: time="2025-11-01T00:44:42.149500929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:44:42.150541 kubelet[2501]: E1101 00:44:42.150521 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:44:42.513252 env[1565]: time="2025-11-01T00:44:42.513099183Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:42.514361 env[1565]: time="2025-11-01T00:44:42.514223805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:44:42.514756 kubelet[2501]: E1101 00:44:42.514686 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:44:42.514962 kubelet[2501]: E1101 00:44:42.514775 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:44:42.515109 kubelet[2501]: E1101 00:44:42.515005 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4wc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:42.517884 env[1565]: time="2025-11-01T00:44:42.517784359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:44:42.622642 systemd-networkd[1324]: calid098c8d73a2: Gained IPv6LL Nov 1 00:44:42.664774 kubelet[2501]: E1101 00:44:42.664691 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:44:42.665160 kubelet[2501]: E1101 00:44:42.664826 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" 
podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:44:42.724000 audit[4893]: NETFILTER_CFG table=filter:112 family=2 entries=20 op=nft_register_rule pid=4893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:42.724000 audit[4893]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc5f8ed700 a2=0 a3=7ffc5f8ed6ec items=0 ppid=2683 pid=4893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:42.724000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:42.735000 audit[4893]: NETFILTER_CFG table=nat:113 family=2 entries=14 op=nft_register_rule pid=4893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:42.735000 audit[4893]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc5f8ed700 a2=0 a3=0 items=0 ppid=2683 pid=4893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:42.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:42.909252 env[1565]: time="2025-11-01T00:44:42.908983096Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:42.910046 env[1565]: time="2025-11-01T00:44:42.909901127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:44:42.910585 kubelet[2501]: E1101 00:44:42.910412 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:44:42.910585 kubelet[2501]: E1101 00:44:42.910568 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:44:42.911457 kubelet[2501]: E1101 00:44:42.910890 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4wc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:42.912451 kubelet[2501]: E1101 00:44:42.912318 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:44:43.530947 env[1565]: time="2025-11-01T00:44:43.530814236Z" level=info msg="StopPodSandbox for \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\"" Nov 1 00:44:43.531231 env[1565]: time="2025-11-01T00:44:43.531020075Z" level=info msg="StopPodSandbox for \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\"" Nov 1 00:44:43.583660 systemd-networkd[1324]: cali8050522b12a: Gained IPv6LL Nov 1 00:44:43.591927 env[1565]: 2025-11-01 00:44:43.574 [INFO][4916] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:44:43.591927 env[1565]: 2025-11-01 00:44:43.574 [INFO][4916] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" iface="eth0" netns="/var/run/netns/cni-e2dd3e84-c81a-209a-d236-760a0255aa7b" Nov 1 00:44:43.591927 env[1565]: 2025-11-01 00:44:43.575 [INFO][4916] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" iface="eth0" netns="/var/run/netns/cni-e2dd3e84-c81a-209a-d236-760a0255aa7b" Nov 1 00:44:43.591927 env[1565]: 2025-11-01 00:44:43.575 [INFO][4916] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" iface="eth0" netns="/var/run/netns/cni-e2dd3e84-c81a-209a-d236-760a0255aa7b" Nov 1 00:44:43.591927 env[1565]: 2025-11-01 00:44:43.575 [INFO][4916] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:44:43.591927 env[1565]: 2025-11-01 00:44:43.575 [INFO][4916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:44:43.591927 env[1565]: 2025-11-01 00:44:43.584 [INFO][4953] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" HandleID="k8s-pod-network.66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:44:43.591927 env[1565]: 2025-11-01 00:44:43.584 [INFO][4953] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:43.591927 env[1565]: 2025-11-01 00:44:43.584 [INFO][4953] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:43.591927 env[1565]: 2025-11-01 00:44:43.589 [WARNING][4953] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" HandleID="k8s-pod-network.66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:44:43.591927 env[1565]: 2025-11-01 00:44:43.589 [INFO][4953] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" HandleID="k8s-pod-network.66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:44:43.591927 env[1565]: 2025-11-01 00:44:43.590 [INFO][4953] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:43.591927 env[1565]: 2025-11-01 00:44:43.591 [INFO][4916] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:44:43.592317 env[1565]: time="2025-11-01T00:44:43.592001179Z" level=info msg="TearDown network for sandbox \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\" successfully" Nov 1 00:44:43.592317 env[1565]: time="2025-11-01T00:44:43.592020252Z" level=info msg="StopPodSandbox for \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\" returns successfully" Nov 1 00:44:43.592470 env[1565]: time="2025-11-01T00:44:43.592457276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d69b6c6c-vws84,Uid:af4cf953-58b5-4727-a1f5-dcd340748032,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:44:43.593807 systemd[1]: run-netns-cni\x2de2dd3e84\x2dc81a\x2d209a\x2dd236\x2d760a0255aa7b.mount: Deactivated successfully. Nov 1 00:44:43.597810 env[1565]: 2025-11-01 00:44:43.574 [INFO][4917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:44:43.597810 env[1565]: 2025-11-01 00:44:43.574 [INFO][4917] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" iface="eth0" netns="/var/run/netns/cni-0eed8a18-ddcc-5e28-649f-41fbe89edf22" Nov 1 00:44:43.597810 env[1565]: 2025-11-01 00:44:43.574 [INFO][4917] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" iface="eth0" netns="/var/run/netns/cni-0eed8a18-ddcc-5e28-649f-41fbe89edf22" Nov 1 00:44:43.597810 env[1565]: 2025-11-01 00:44:43.574 [INFO][4917] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" iface="eth0" netns="/var/run/netns/cni-0eed8a18-ddcc-5e28-649f-41fbe89edf22" Nov 1 00:44:43.597810 env[1565]: 2025-11-01 00:44:43.574 [INFO][4917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:44:43.597810 env[1565]: 2025-11-01 00:44:43.574 [INFO][4917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:44:43.597810 env[1565]: 2025-11-01 00:44:43.584 [INFO][4951] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" HandleID="k8s-pod-network.de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:44:43.597810 env[1565]: 2025-11-01 00:44:43.584 [INFO][4951] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:43.597810 env[1565]: 2025-11-01 00:44:43.590 [INFO][4951] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:43.597810 env[1565]: 2025-11-01 00:44:43.594 [WARNING][4951] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" HandleID="k8s-pod-network.de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:44:43.597810 env[1565]: 2025-11-01 00:44:43.594 [INFO][4951] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" HandleID="k8s-pod-network.de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:44:43.597810 env[1565]: 2025-11-01 00:44:43.595 [INFO][4951] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:43.597810 env[1565]: 2025-11-01 00:44:43.596 [INFO][4917] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:44:43.598261 env[1565]: time="2025-11-01T00:44:43.597877393Z" level=info msg="TearDown network for sandbox \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\" successfully" Nov 1 00:44:43.598261 env[1565]: time="2025-11-01T00:44:43.597895513Z" level=info msg="StopPodSandbox for \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\" returns successfully" Nov 1 00:44:43.598323 env[1565]: time="2025-11-01T00:44:43.598255655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86gd2,Uid:2eed7e34-9d6a-4f7a-a712-82a7a599696d,Namespace:kube-system,Attempt:1,}" Nov 1 00:44:43.600560 systemd[1]: run-netns-cni\x2d0eed8a18\x2dddcc\x2d5e28\x2d649f\x2d41fbe89edf22.mount: Deactivated successfully. Nov 1 00:44:43.650965 systemd-networkd[1324]: cali3e033247e8b: Link UP Nov 1 00:44:43.677982 kubelet[2501]: E1101 00:44:43.677922 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:44:43.678089 kubelet[2501]: E1101 00:44:43.678061 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:44:43.703352 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes 
ready Nov 1 00:44:43.703394 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3e033247e8b: link becomes ready Nov 1 00:44:43.703645 systemd-networkd[1324]: cali3e033247e8b: Gained carrier Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.613 [INFO][4984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0 calico-apiserver-5d69b6c6c- calico-apiserver af4cf953-58b5-4727-a1f5-dcd340748032 981 0 2025-11-01 00:44:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d69b6c6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-3bc793b712 calico-apiserver-5d69b6c6c-vws84 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3e033247e8b [] [] }} ContainerID="80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-vws84" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-" Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.614 [INFO][4984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-vws84" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.627 [INFO][5030] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" HandleID="k8s-pod-network.80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.627 [INFO][5030] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" HandleID="k8s-pod-network.80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e9750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-3bc793b712", "pod":"calico-apiserver-5d69b6c6c-vws84", "timestamp":"2025-11-01 00:44:43.627137232 +0000 UTC"}, Hostname:"ci-3510.3.8-n-3bc793b712", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.627 [INFO][5030] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.627 [INFO][5030] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.627 [INFO][5030] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-3bc793b712' Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.632 [INFO][5030] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.636 [INFO][5030] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.639 [INFO][5030] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.640 [INFO][5030] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.642 [INFO][5030] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.642 [INFO][5030] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.643 [INFO][5030] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347 Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.645 [INFO][5030] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.649 [INFO][5030] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.69/26] block=192.168.3.64/26 handle="k8s-pod-network.80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.649 [INFO][5030] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.69/26] handle="k8s-pod-network.80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.649 [INFO][5030] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:44:43.712280 env[1565]: 2025-11-01 00:44:43.649 [INFO][5030] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.69/26] IPv6=[] ContainerID="80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" HandleID="k8s-pod-network.80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:44:43.712736 env[1565]: 2025-11-01 00:44:43.650 [INFO][4984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-vws84" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0", GenerateName:"calico-apiserver-5d69b6c6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"af4cf953-58b5-4727-a1f5-dcd340748032", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d69b6c6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"", Pod:"calico-apiserver-5d69b6c6c-vws84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e033247e8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:43.712736 env[1565]: 2025-11-01 00:44:43.650 [INFO][4984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.69/32] ContainerID="80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-vws84" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:44:43.712736 env[1565]: 2025-11-01 00:44:43.650 [INFO][4984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e033247e8b ContainerID="80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-vws84" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:44:43.712736 env[1565]: 2025-11-01 00:44:43.703 [INFO][4984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-vws84" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:44:43.712736 env[1565]: 2025-11-01 00:44:43.704 [INFO][4984] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-vws84" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0", GenerateName:"calico-apiserver-5d69b6c6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"af4cf953-58b5-4727-a1f5-dcd340748032", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d69b6c6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347", Pod:"calico-apiserver-5d69b6c6c-vws84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e033247e8b", MAC:"82:21:db:ce:e0:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:43.712736 env[1565]: 2025-11-01 00:44:43.711 [INFO][4984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-vws84" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:44:43.716697 env[1565]: time="2025-11-01T00:44:43.716660225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:43.716697 env[1565]: time="2025-11-01T00:44:43.716681580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:43.716697 env[1565]: time="2025-11-01T00:44:43.716688543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:43.716808 env[1565]: time="2025-11-01T00:44:43.716752382Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347 pid=5080 runtime=io.containerd.runc.v2 Nov 1 00:44:43.718000 audit[5089]: NETFILTER_CFG table=filter:114 family=2 entries=62 op=nft_register_chain pid=5089 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:43.718000 audit[5089]: SYSCALL arch=c000003e syscall=46 success=yes exit=31772 a0=3 a1=7ffca66f1090 a2=0 a3=7ffca66f107c items=0 ppid=4234 pid=5089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.718000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:43.722573 systemd[1]: Started cri-containerd-80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347.scope. Nov 1 00:44:43.729000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit: BPF prog-id=174 op=LOAD Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:44:43.729000 audit[5090]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000149c48 a2=10 a3=1c items=0 ppid=5080 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303930616338656236373766376430386364323033663864666366 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001496b0 a2=3c a3=c items=0 ppid=5080 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303930616338656236373766376430386364323033663864666366 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit: BPF prog-id=175 op=LOAD Nov 1 00:44:43.729000 audit[5090]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001499d8 a2=78 a3=c000279d90 items=0 ppid=5080 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303930616338656236373766376430386364323033663864666366 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit: BPF prog-id=176 op=LOAD Nov 1 00:44:43.729000 audit[5090]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000149770 a2=78 a3=c000279dd8 items=0 ppid=5080 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303930616338656236373766376430386364323033663864666366 Nov 1 00:44:43.729000 audit: BPF prog-id=176 op=UNLOAD Nov 1 00:44:43.729000 audit: BPF prog-id=175 op=UNLOAD Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } 
for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { perfmon } for pid=5090 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit[5090]: AVC avc: denied { bpf } for pid=5090 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.729000 audit: BPF prog-id=177 op=LOAD Nov 1 00:44:43.729000 audit[5090]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000149c30 a2=78 a3=c0003f01e8 items=0 ppid=5080 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303930616338656236373766376430386364323033663864666366 Nov 1 00:44:43.746832 env[1565]: time="2025-11-01T00:44:43.746804490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d69b6c6c-vws84,Uid:af4cf953-58b5-4727-a1f5-dcd340748032,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347\"" Nov 1 00:44:43.747491 env[1565]: time="2025-11-01T00:44:43.747478870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:44:43.756104 systemd-networkd[1324]: calia9db2ac3380: Link UP Nov 1 00:44:43.782525 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia9db2ac3380: link becomes ready Nov 1 00:44:43.782488 systemd-networkd[1324]: calia9db2ac3380: Gained carrier Nov 1 00:44:43.782698 systemd-networkd[1324]: cali94fde579e17: Gained IPv6LL Nov 1 
00:44:43.788907 env[1565]: 2025-11-01 00:44:43.618 [INFO][5004] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0 coredns-668d6bf9bc- kube-system 2eed7e34-9d6a-4f7a-a712-82a7a599696d 980 0 2025-11-01 00:44:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-3bc793b712 coredns-668d6bf9bc-86gd2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia9db2ac3380 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" Namespace="kube-system" Pod="coredns-668d6bf9bc-86gd2" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-" Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.618 [INFO][5004] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" Namespace="kube-system" Pod="coredns-668d6bf9bc-86gd2" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.630 [INFO][5039] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" HandleID="k8s-pod-network.b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.630 [INFO][5039] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" HandleID="k8s-pod-network.b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a0ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-3bc793b712", "pod":"coredns-668d6bf9bc-86gd2", "timestamp":"2025-11-01 00:44:43.630563144 +0000 UTC"}, Hostname:"ci-3510.3.8-n-3bc793b712", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.630 [INFO][5039] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.649 [INFO][5039] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.649 [INFO][5039] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-3bc793b712' Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.732 [INFO][5039] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.736 [INFO][5039] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.740 [INFO][5039] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.741 [INFO][5039] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.743 [INFO][5039] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.743 [INFO][5039] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.744 [INFO][5039] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855 Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.750 [INFO][5039] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.754 [INFO][5039] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.70/26] block=192.168.3.64/26 handle="k8s-pod-network.b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.754 [INFO][5039] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.70/26] handle="k8s-pod-network.b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.754 [INFO][5039] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
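The audit records earlier in this capture (for example the SYSCALL/PROCTITLE pair logged at 00:44:43.729) encode most of their payload: arch=c000003e is the x86_64 ABI, syscall=321 is bpf(2) on that ABI, and PROCTITLE carries the process argv hex-encoded with NUL separators. A minimal decoding sketch in Python 3; the sample hex string and the tiny syscall table are illustrative only (the table covers just the syscall numbers that appear in this log), not code taken from auditd or runc:

```python
# Sketch: helpers for reading the audit SYSCALL/PROCTITLE records above.
# The lookup covers only syscall numbers seen in this log, for x86_64.
X86_64_SYSCALLS = {46: "sendmsg", 262: "newfstatat", 321: "bpf"}

def decode_proctitle(hex_argv: str) -> str:
    """Audit PROCTITLE is the argv vector, hex-encoded with NUL separators."""
    return " ".join(
        part.decode("utf-8", errors="replace")
        for part in bytes.fromhex(hex_argv).split(b"\x00")
        if part
    )

# Illustrative value (not copied from the log): decodes to "runc --help".
print(decode_proctitle("72756E63002D2D68656C70"))
print(X86_64_SYSCALLS[321])  # -> "bpf", matching the BPF prog-id LOAD events
```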
Nov 1 00:44:43.788907 env[1565]: 2025-11-01 00:44:43.754 [INFO][5039] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.70/26] IPv6=[] ContainerID="b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" HandleID="k8s-pod-network.b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:44:43.789452 env[1565]: 2025-11-01 00:44:43.755 [INFO][5004] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" Namespace="kube-system" Pod="coredns-668d6bf9bc-86gd2" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2eed7e34-9d6a-4f7a-a712-82a7a599696d", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"", Pod:"coredns-668d6bf9bc-86gd2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9db2ac3380", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:43.789452 env[1565]: 2025-11-01 00:44:43.755 [INFO][5004] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.70/32] ContainerID="b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" Namespace="kube-system" Pod="coredns-668d6bf9bc-86gd2" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:44:43.789452 env[1565]: 2025-11-01 00:44:43.755 [INFO][5004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9db2ac3380 ContainerID="b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" Namespace="kube-system" Pod="coredns-668d6bf9bc-86gd2" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:44:43.789452 env[1565]: 2025-11-01 00:44:43.782 [INFO][5004] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" Namespace="kube-system" Pod="coredns-668d6bf9bc-86gd2" 
WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:44:43.789452 env[1565]: 2025-11-01 00:44:43.782 [INFO][5004] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" Namespace="kube-system" Pod="coredns-668d6bf9bc-86gd2" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2eed7e34-9d6a-4f7a-a712-82a7a599696d", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855", Pod:"coredns-668d6bf9bc-86gd2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9db2ac3380", MAC:"1a:58:6e:d1:1a:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:43.789452 env[1565]: 2025-11-01 00:44:43.787 [INFO][5004] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855" Namespace="kube-system" Pod="coredns-668d6bf9bc-86gd2" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:44:43.794323 env[1565]: time="2025-11-01T00:44:43.794285621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:43.794323 env[1565]: time="2025-11-01T00:44:43.794309241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:43.794323 env[1565]: time="2025-11-01T00:44:43.794319756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:43.794431 env[1565]: time="2025-11-01T00:44:43.794393805Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855 pid=5129 runtime=io.containerd.runc.v2 Nov 1 00:44:43.796000 audit[5140]: NETFILTER_CFG table=filter:115 family=2 entries=58 op=nft_register_chain pid=5140 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:43.796000 audit[5140]: SYSCALL arch=c000003e syscall=46 success=yes exit=27304 a0=3 a1=7ffcad717810 a2=0 a3=7ffcad7177fc items=0 ppid=4234 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.796000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:43.799846 systemd[1]: Started cri-containerd-b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855.scope. Nov 1 00:44:43.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit: BPF prog-id=178 op=LOAD Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:44:43.806000 audit[5139]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=5129 pid=5139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663033636431353065363966636237333534376433366333313461 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=5129 pid=5139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663033636431353065363966636237333534376433366333313461 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit: BPF prog-id=179 op=LOAD Nov 1 00:44:43.806000 audit[5139]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0002f1d90 items=0 ppid=5129 pid=5139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663033636431353065363966636237333534376433366333313461 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit: BPF prog-id=180 op=LOAD Nov 1 00:44:43.806000 audit[5139]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0002f1dd8 items=0 ppid=5129 pid=5139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663033636431353065363966636237333534376433366333313461 Nov 1 00:44:43.806000 audit: BPF prog-id=180 op=UNLOAD Nov 1 00:44:43.806000 audit: BPF prog-id=179 op=UNLOAD Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } 
for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { perfmon } for pid=5139 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit[5139]: AVC avc: denied { bpf } for pid=5139 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.806000 audit: BPF prog-id=181 op=LOAD Nov 1 00:44:43.806000 audit[5139]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0003e41e8 items=0 ppid=5129 pid=5139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233663033636431353065363966636237333534376433366333313461 Nov 1 00:44:43.823474 env[1565]: time="2025-11-01T00:44:43.823421527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86gd2,Uid:2eed7e34-9d6a-4f7a-a712-82a7a599696d,Namespace:kube-system,Attempt:1,} returns sandbox id \"b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855\"" Nov 1 00:44:43.824566 env[1565]: time="2025-11-01T00:44:43.824520973Z" level=info msg="CreateContainer within sandbox \"b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:44:43.828883 env[1565]: time="2025-11-01T00:44:43.828840983Z" level=info msg="CreateContainer within sandbox \"b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"cd0f6c157b4cac5918ac23585c11d848d8f85853a2de55de4a47853a149eccf5\"" Nov 1 00:44:43.829047 env[1565]: time="2025-11-01T00:44:43.829003890Z" level=info msg="StartContainer for \"cd0f6c157b4cac5918ac23585c11d848d8f85853a2de55de4a47853a149eccf5\"" Nov 1 00:44:43.836220 systemd[1]: Started cri-containerd-cd0f6c157b4cac5918ac23585c11d848d8f85853a2de55de4a47853a149eccf5.scope. Nov 1 00:44:43.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit: BPF prog-id=182 op=LOAD Nov 1 00:44:43.842000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=5129 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364306636633135376234636163353931386163323335383563313164 Nov 1 00:44:43.842000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=5129 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364306636633135376234636163353931386163323335383563313164 Nov 1 00:44:43.842000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.842000 audit: BPF prog-id=183 op=LOAD Nov 1 00:44:43.842000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000290690 items=0 ppid=5129 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364306636633135376234636163353931386163323335383563313164 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { bpf } for 
pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit: BPF prog-id=184 op=LOAD Nov 1 00:44:43.843000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c0002906d8 items=0 ppid=5129 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364306636633135376234636163353931386163323335383563313164 Nov 1 00:44:43.843000 audit: BPF prog-id=184 op=UNLOAD Nov 1 00:44:43.843000 audit: BPF prog-id=183 op=UNLOAD Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { perfmon } for 
pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { perfmon } for pid=5172 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit[5172]: AVC avc: denied { bpf } for pid=5172 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:43.843000 audit: BPF prog-id=185 op=LOAD Nov 1 00:44:43.843000 audit[5172]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000290ae8 items=0 ppid=5129 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364306636633135376234636163353931386163323335383563313164 Nov 1 00:44:43.848967 env[1565]: time="2025-11-01T00:44:43.848943596Z" level=info msg="StartContainer for \"cd0f6c157b4cac5918ac23585c11d848d8f85853a2de55de4a47853a149eccf5\" returns successfully" Nov 1 00:44:43.853000 audit[5182]: AVC avc: denied { getattr } for pid=5182 comm="coredns" path="cgroup:[4026532874]" dev="nsfs" ino=4026532874 scontext=system_u:system_r:svirt_lxc_net_t:s0:c299,c817 tcontext=system_u:object_r:nsfs_t:s0 tclass=file permissive=0 Nov 1 00:44:43.853000 audit[5182]: SYSCALL arch=c000003e syscall=262 success=no exit=-13 a0=ffffffffffffff9c a1=c000528888 a2=c00052ee08 a3=0 items=0 ppid=5129 pid=5182 auid=4294967295 uid=65532 gid=65532 euid=65532 suid=65532 fsuid=65532 egid=65532 sgid=65532 fsgid=65532 tty=(none) ses=4294967295 comm="coredns" exe="/coredns" subj=system_u:system_r:svirt_lxc_net_t:s0:c299,c817 key=(null) Nov 1 00:44:43.853000 audit: PROCTITLE proctitle=2F636F7265646E73002D636F6E66002F6574632F636F7265646E732F436F726566696C65 Nov 1 00:44:44.127007 env[1565]: time="2025-11-01T00:44:44.126768653Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:44.128090 env[1565]: time="2025-11-01T00:44:44.127704817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:44:44.128290 kubelet[2501]: E1101 00:44:44.128198 2501 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:44.129176 kubelet[2501]: E1101 00:44:44.128306 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:44.129176 kubelet[2501]: E1101 00:44:44.128644 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48hx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d69b6c6c-vws84_calico-apiserver(af4cf953-58b5-4727-a1f5-dcd340748032): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:44.130147 kubelet[2501]: E1101 00:44:44.130063 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:44:44.530558 env[1565]: time="2025-11-01T00:44:44.530451015Z" level=info msg="StopPodSandbox for \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\"" Nov 1 00:44:44.530558 env[1565]: time="2025-11-01T00:44:44.530451498Z" level=info msg="StopPodSandbox for \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\"" Nov 1 00:44:44.582102 env[1565]: 2025-11-01 00:44:44.564 [INFO][5237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:44:44.582102 env[1565]: 2025-11-01 00:44:44.564 [INFO][5237] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" iface="eth0" netns="/var/run/netns/cni-4ffc199c-3b44-284e-7016-e0e417932b94" Nov 1 00:44:44.582102 env[1565]: 2025-11-01 00:44:44.564 [INFO][5237] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" iface="eth0" netns="/var/run/netns/cni-4ffc199c-3b44-284e-7016-e0e417932b94" Nov 1 00:44:44.582102 env[1565]: 2025-11-01 00:44:44.564 [INFO][5237] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" iface="eth0" netns="/var/run/netns/cni-4ffc199c-3b44-284e-7016-e0e417932b94" Nov 1 00:44:44.582102 env[1565]: 2025-11-01 00:44:44.564 [INFO][5237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:44:44.582102 env[1565]: 2025-11-01 00:44:44.564 [INFO][5237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:44:44.582102 env[1565]: 2025-11-01 00:44:44.574 [INFO][5268] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" HandleID="k8s-pod-network.9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:44:44.582102 env[1565]: 2025-11-01 00:44:44.574 [INFO][5268] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:44.582102 env[1565]: 2025-11-01 00:44:44.574 [INFO][5268] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:44.582102 env[1565]: 2025-11-01 00:44:44.579 [WARNING][5268] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" HandleID="k8s-pod-network.9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:44:44.582102 env[1565]: 2025-11-01 00:44:44.579 [INFO][5268] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" HandleID="k8s-pod-network.9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:44:44.582102 env[1565]: 2025-11-01 00:44:44.580 [INFO][5268] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:44.582102 env[1565]: 2025-11-01 00:44:44.581 [INFO][5237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:44:44.582398 env[1565]: time="2025-11-01T00:44:44.582144059Z" level=info msg="TearDown network for sandbox \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\" successfully" Nov 1 00:44:44.582398 env[1565]: time="2025-11-01T00:44:44.582164740Z" level=info msg="StopPodSandbox for \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\" returns successfully" Nov 1 00:44:44.582578 env[1565]: time="2025-11-01T00:44:44.582561633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4lsqg,Uid:027d8f2f-3806-4651-bd8a-463d116e5266,Namespace:kube-system,Attempt:1,}" Nov 1 00:44:44.587164 env[1565]: 2025-11-01 00:44:44.564 [INFO][5236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:44:44.587164 env[1565]: 2025-11-01 00:44:44.564 [INFO][5236] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" iface="eth0" netns="/var/run/netns/cni-f80aae64-1df7-78a6-a9f2-a1b3580a807b" Nov 1 00:44:44.587164 env[1565]: 2025-11-01 00:44:44.564 [INFO][5236] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" iface="eth0" netns="/var/run/netns/cni-f80aae64-1df7-78a6-a9f2-a1b3580a807b" Nov 1 00:44:44.587164 env[1565]: 2025-11-01 00:44:44.564 [INFO][5236] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" iface="eth0" netns="/var/run/netns/cni-f80aae64-1df7-78a6-a9f2-a1b3580a807b" Nov 1 00:44:44.587164 env[1565]: 2025-11-01 00:44:44.564 [INFO][5236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:44:44.587164 env[1565]: 2025-11-01 00:44:44.564 [INFO][5236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:44:44.587164 env[1565]: 2025-11-01 00:44:44.574 [INFO][5270] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" HandleID="k8s-pod-network.3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:44:44.587164 env[1565]: 2025-11-01 00:44:44.574 [INFO][5270] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:44.587164 env[1565]: 2025-11-01 00:44:44.580 [INFO][5270] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:44.587164 env[1565]: 2025-11-01 00:44:44.584 [WARNING][5270] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" HandleID="k8s-pod-network.3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:44:44.587164 env[1565]: 2025-11-01 00:44:44.584 [INFO][5270] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" HandleID="k8s-pod-network.3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:44:44.587164 env[1565]: 2025-11-01 00:44:44.585 [INFO][5270] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:44.587164 env[1565]: 2025-11-01 00:44:44.586 [INFO][5236] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:44:44.587458 env[1565]: time="2025-11-01T00:44:44.587220874Z" level=info msg="TearDown network for sandbox \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\" successfully" Nov 1 00:44:44.587458 env[1565]: time="2025-11-01T00:44:44.587238179Z" level=info msg="StopPodSandbox for \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\" returns successfully" Nov 1 00:44:44.587584 env[1565]: time="2025-11-01T00:44:44.587569500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d69b6c6c-s7kdv,Uid:2f6bb8ca-d7c7-4d64-919c-e85097fdc068,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:44:44.596677 systemd[1]: run-netns-cni\x2df80aae64\x2d1df7\x2d78a6\x2da9f2\x2da1b3580a807b.mount: Deactivated successfully. Nov 1 00:44:44.596729 systemd[1]: run-netns-cni\x2d4ffc199c\x2d3b44\x2d284e\x2d7016\x2de0e417932b94.mount: Deactivated successfully. 
Nov 1 00:44:44.640602 systemd-networkd[1324]: cali10852593721: Link UP Nov 1 00:44:44.666505 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali10852593721: link becomes ready Nov 1 00:44:44.666774 systemd-networkd[1324]: cali10852593721: Gained carrier Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.604 [INFO][5305] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0 coredns-668d6bf9bc- kube-system 027d8f2f-3806-4651-bd8a-463d116e5266 1005 0 2025-11-01 00:44:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-3bc793b712 coredns-668d6bf9bc-4lsqg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali10852593721 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lsqg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-" Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.604 [INFO][5305] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lsqg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.617 [INFO][5353] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" HandleID="k8s-pod-network.782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.617 [INFO][5353] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" HandleID="k8s-pod-network.782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003dd2f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-3bc793b712", "pod":"coredns-668d6bf9bc-4lsqg", "timestamp":"2025-11-01 00:44:44.617256424 +0000 UTC"}, Hostname:"ci-3510.3.8-n-3bc793b712", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.617 [INFO][5353] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.617 [INFO][5353] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.617 [INFO][5353] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-3bc793b712' Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.622 [INFO][5353] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.625 [INFO][5353] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.628 [INFO][5353] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.630 [INFO][5353] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.631 [INFO][5353] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.631 [INFO][5353] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.632 [INFO][5353] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.635 [INFO][5353] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.638 [INFO][5353] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.71/26] block=192.168.3.64/26 handle="k8s-pod-network.782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.638 [INFO][5353] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.71/26] handle="k8s-pod-network.782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.638 [INFO][5353] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
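As with the earlier coredns pod, the claimed address (192.168.3.70 before, 192.168.3.71 here) comes out of the same host-affine block 192.168.3.64/26 reported by ipam/ipam.go. A quick stand-alone containment check using only the Python standard library (a verification sketch, not Calico's own IPAM code):

```python
# Verification sketch (stdlib only): both addresses claimed on this node
# must fall inside the host-affine block reported in the IPAM entries above.
import ipaddress

block = ipaddress.ip_network("192.168.3.64/26")
claimed = [ipaddress.ip_address("192.168.3.70"),   # coredns-668d6bf9bc-86gd2
           ipaddress.ip_address("192.168.3.71")]   # coredns-668d6bf9bc-4lsqg

for addr in claimed:
    assert addr in block, f"{addr} is outside {block}"
print(f"{', '.join(map(str, claimed))} fit in {block} "
      f"({block.num_addresses} addresses)")
```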
Nov 1 00:44:44.673118 env[1565]: 2025-11-01 00:44:44.638 [INFO][5353] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.71/26] IPv6=[] ContainerID="782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" HandleID="k8s-pod-network.782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:44:44.673573 env[1565]: 2025-11-01 00:44:44.639 [INFO][5305] cni-plugin/k8s.go 418: Populated endpoint ContainerID="782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lsqg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"027d8f2f-3806-4651-bd8a-463d116e5266", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"", Pod:"coredns-668d6bf9bc-4lsqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10852593721", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:44.673573 env[1565]: 2025-11-01 00:44:44.639 [INFO][5305] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.71/32] ContainerID="782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lsqg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:44:44.673573 env[1565]: 2025-11-01 00:44:44.639 [INFO][5305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10852593721 ContainerID="782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lsqg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:44:44.673573 env[1565]: 2025-11-01 00:44:44.666 [INFO][5305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lsqg" 
WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:44:44.673573 env[1565]: 2025-11-01 00:44:44.666 [INFO][5305] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lsqg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"027d8f2f-3806-4651-bd8a-463d116e5266", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d", Pod:"coredns-668d6bf9bc-4lsqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10852593721", MAC:"8e:c4:60:9f:63:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:44.673573 env[1565]: 2025-11-01 00:44:44.672 [INFO][5305] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lsqg" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:44:44.677949 env[1565]: time="2025-11-01T00:44:44.677914843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:44.677949 env[1565]: time="2025-11-01T00:44:44.677936582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:44.677949 env[1565]: time="2025-11-01T00:44:44.677943468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:44.678072 env[1565]: time="2025-11-01T00:44:44.678014582Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d pid=5398 runtime=io.containerd.runc.v2 Nov 1 00:44:44.680597 kubelet[2501]: E1101 00:44:44.680576 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:44:44.680000 audit[5409]: NETFILTER_CFG table=filter:116 family=2 entries=52 op=nft_register_chain pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:44.680000 audit[5409]: SYSCALL arch=c000003e syscall=46 success=yes exit=23908 a0=3 a1=7ffe47c4ca40 a2=0 a3=7ffe47c4ca2c items=0 ppid=4234 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.680000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:44.686436 kubelet[2501]: I1101 00:44:44.686384 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-86gd2" podStartSLOduration=37.686367952 podStartE2EDuration="37.686367952s" podCreationTimestamp="2025-11-01 00:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:44:44.686052828 +0000 UTC m=+44.210982853" watchObservedRunningTime="2025-11-01 00:44:44.686367952 +0000 UTC m=+44.211297966" Nov 1 00:44:44.688475 systemd[1]: Started cri-containerd-782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d.scope. 
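The pod_startup_latency_tracker line above reports podStartSLOduration=37.686367952s for coredns-668d6bf9bc-86gd2, which is just watchObservedRunningTime minus podCreationTimestamp. A small Go sketch of that arithmetic; kubelet computes this internally from its own timestamps, so the parsing helper here is purely illustrative:

package main

import (
	"fmt"
	"time"
)

// mustParse reads timestamps in the format kubelet prints in the log line above.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-01 00:44:07 +0000 UTC")           // podCreationTimestamp
	running := mustParse("2025-11-01 00:44:44.686367952 +0000 UTC") // watchObservedRunningTime
	fmt.Println(running.Sub(created))                               // 37.686367952s, matching podStartSLOduration
}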
Nov 1 00:44:44.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.693000 audit: BPF prog-id=186 op=LOAD Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=5398 pid=5408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738326264336565326439363835363865323434636565623064326430 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001476b0 a2=3c a3=c items=0 ppid=5398 pid=5408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.694000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738326264336565326439363835363865323434636565623064326430 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit: BPF prog-id=187 op=LOAD Nov 1 00:44:44.694000 audit[5408]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001479d8 a2=78 a3=c0001c8320 items=0 ppid=5398 pid=5408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738326264336565326439363835363865323434636565623064326430 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: 
denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit: BPF prog-id=188 op=LOAD Nov 1 00:44:44.694000 audit[5408]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000147770 a2=78 a3=c0001c8368 items=0 ppid=5398 pid=5408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738326264336565326439363835363865323434636565623064326430 Nov 1 00:44:44.694000 audit: BPF prog-id=188 op=UNLOAD Nov 1 00:44:44.694000 audit: BPF prog-id=187 op=UNLOAD Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC 
avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { perfmon } for pid=5408 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit[5408]: AVC avc: denied { bpf } for pid=5408 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.694000 audit: BPF prog-id=189 op=LOAD Nov 1 00:44:44.694000 audit[5408]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000147c30 a2=78 a3=c0001c8778 items=0 ppid=5398 pid=5408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738326264336565326439363835363865323434636565623064326430 Nov 1 00:44:44.695000 audit[5426]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=5426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:44.695000 audit[5426]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe7f3f5de0 a2=0 a3=7ffe7f3f5dcc items=0 ppid=2683 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.695000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:44.707000 audit[5426]: NETFILTER_CFG table=nat:118 family=2 entries=14 op=nft_register_rule pid=5426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:44.707000 audit[5426]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe7f3f5de0 a2=0 a3=0 items=0 ppid=2683 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:44.711386 env[1565]: time="2025-11-01T00:44:44.711363504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4lsqg,Uid:027d8f2f-3806-4651-bd8a-463d116e5266,Namespace:kube-system,Attempt:1,} returns sandbox id \"782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d\"" Nov 1 00:44:44.712539 env[1565]: time="2025-11-01T00:44:44.712523115Z" level=info msg="CreateContainer within sandbox \"782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:44:44.717090 env[1565]: time="2025-11-01T00:44:44.717048800Z" level=info msg="CreateContainer within 
sandbox \"782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a2c1883ade112e183fed56882c2bf3a3c99334c593f85134ee57d91d9f6db995\"" Nov 1 00:44:44.717268 env[1565]: time="2025-11-01T00:44:44.717255192Z" level=info msg="StartContainer for \"a2c1883ade112e183fed56882c2bf3a3c99334c593f85134ee57d91d9f6db995\"" Nov 1 00:44:44.725326 systemd[1]: Started cri-containerd-a2c1883ade112e183fed56882c2bf3a3c99334c593f85134ee57d91d9f6db995.scope. Nov 1 00:44:44.728000 audit[5460]: NETFILTER_CFG table=filter:119 family=2 entries=17 op=nft_register_rule pid=5460 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:44.728000 audit[5460]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffff8ca9e70 a2=0 a3=7ffff8ca9e5c items=0 ppid=2683 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.728000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:44.731000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.731000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.731000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.731000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.731000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.731000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.731000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.731000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.731000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit: BPF prog-id=190 op=LOAD Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:44:44.732000 audit[5443]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=5398 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132633138383361646531313265313833666564353638383263326266 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=8 items=0 ppid=5398 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132633138383361646531313265313833666564353638383263326266 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit: BPF prog-id=191 op=LOAD Nov 1 00:44:44.732000 audit[5443]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c000308d60 items=0 ppid=5398 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132633138383361646531313265313833666564353638383263326266 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit: BPF prog-id=192 op=LOAD Nov 1 00:44:44.732000 audit[5443]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c000308da8 items=0 ppid=5398 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132633138383361646531313265313833666564353638383263326266 Nov 1 00:44:44.732000 audit: BPF prog-id=192 op=UNLOAD Nov 1 00:44:44.732000 audit: BPF prog-id=191 op=UNLOAD Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } 
for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { perfmon } for pid=5443 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit[5443]: AVC avc: denied { bpf } for pid=5443 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.732000 audit: BPF prog-id=193 op=LOAD Nov 1 00:44:44.732000 audit[5443]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c0003091b8 items=0 ppid=5398 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132633138383361646531313265313833666564353638383263326266 Nov 1 00:44:44.737000 audit[5460]: NETFILTER_CFG table=nat:120 family=2 entries=35 op=nft_register_chain pid=5460 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:44.737000 audit[5460]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffff8ca9e70 a2=0 a3=7ffff8ca9e5c items=0 ppid=2683 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:44.738504 env[1565]: time="2025-11-01T00:44:44.738473189Z" level=info msg="StartContainer for 
\"a2c1883ade112e183fed56882c2bf3a3c99334c593f85134ee57d91d9f6db995\" returns successfully" Nov 1 00:44:44.743000 audit[5459]: AVC avc: denied { getattr } for pid=5459 comm="coredns" path="cgroup:[4026532759]" dev="nsfs" ino=4026532759 scontext=system_u:system_r:svirt_lxc_net_t:s0:c163,c570 tcontext=system_u:object_r:nsfs_t:s0 tclass=file permissive=0 Nov 1 00:44:44.743000 audit[5459]: SYSCALL arch=c000003e syscall=262 success=no exit=-13 a0=ffffffffffffff9c a1=c000058960 a2=c000906b98 a3=0 items=0 ppid=5398 pid=5459 auid=4294967295 uid=65532 gid=65532 euid=65532 suid=65532 fsuid=65532 egid=65532 sgid=65532 fsgid=65532 tty=(none) ses=4294967295 comm="coredns" exe="/coredns" subj=system_u:system_r:svirt_lxc_net_t:s0:c163,c570 key=(null) Nov 1 00:44:44.743000 audit: PROCTITLE proctitle=2F636F7265646E73002D636F6E66002F6574632F636F7265646E732F436F726566696C65 Nov 1 00:44:44.744269 systemd-networkd[1324]: cali7f78c61e7cd: Link UP Nov 1 00:44:44.793865 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:44:44.793935 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7f78c61e7cd: link becomes ready Nov 1 00:44:44.794199 systemd-networkd[1324]: cali7f78c61e7cd: Gained carrier Nov 1 00:44:44.798620 systemd-networkd[1324]: cali3e033247e8b: Gained IPv6LL Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.608 [INFO][5321] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0 calico-apiserver-5d69b6c6c- calico-apiserver 2f6bb8ca-d7c7-4d64-919c-e85097fdc068 1006 0 2025-11-01 00:44:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d69b6c6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-3bc793b712 calico-apiserver-5d69b6c6c-s7kdv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7f78c61e7cd [] [] }} ContainerID="37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-s7kdv" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-" Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.608 [INFO][5321] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-s7kdv" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.620 [INFO][5358] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" HandleID="k8s-pod-network.37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.620 [INFO][5358] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" HandleID="k8s-pod-network.37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d5350), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-3bc793b712", "pod":"calico-apiserver-5d69b6c6c-s7kdv", "timestamp":"2025-11-01 00:44:44.620346462 +0000 UTC"}, Hostname:"ci-3510.3.8-n-3bc793b712", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.620 [INFO][5358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.638 [INFO][5358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.638 [INFO][5358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-3bc793b712' Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.722 [INFO][5358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.728 [INFO][5358] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.731 [INFO][5358] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.732 [INFO][5358] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.734 [INFO][5358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.734 [INFO][5358] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.735 [INFO][5358] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8 Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.738 [INFO][5358] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.742 [INFO][5358] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.72/26] block=192.168.3.64/26 handle="k8s-pod-network.37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.742 [INFO][5358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.72/26] handle="k8s-pod-network.37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" host="ci-3510.3.8-n-3bc793b712" Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.742 [INFO][5358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
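The audit SYSCALL records interleaved through this section all carry arch=c000003e (AUDIT_ARCH_X86_64), so the raw numbers decode against the x86_64 syscall table: 46 is sendmsg (the iptables-restore/iptables-nft netlink traffic) and 321 is bpf (the runc BPF prog-id LOAD/UNLOAD events). A tiny Go lookup covering just the two numbers that appear here; a full decoder would use the kernel's syscall_64.tbl:

package main

import "fmt"

// x8664 maps the syscall numbers seen in this section for arch=c000003e.
var x8664 = map[int]string{
	46:  "sendmsg", // iptables-restore pushing rules over netlink
	321: "bpf",     // runc loading/unloading its BPF programs
}

func main() {
	for _, nr := range []int{46, 321} {
		fmt.Printf("syscall=%d -> %s\n", nr, x8664[nr])
	}
}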
Nov 1 00:44:44.801570 env[1565]: 2025-11-01 00:44:44.742 [INFO][5358] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.72/26] IPv6=[] ContainerID="37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" HandleID="k8s-pod-network.37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:44:44.802031 env[1565]: 2025-11-01 00:44:44.743 [INFO][5321] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-s7kdv" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0", GenerateName:"calico-apiserver-5d69b6c6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f6bb8ca-d7c7-4d64-919c-e85097fdc068", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d69b6c6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"", Pod:"calico-apiserver-5d69b6c6c-s7kdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f78c61e7cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:44.802031 env[1565]: 2025-11-01 00:44:44.743 [INFO][5321] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.72/32] ContainerID="37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-s7kdv" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:44:44.802031 env[1565]: 2025-11-01 00:44:44.743 [INFO][5321] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f78c61e7cd ContainerID="37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-s7kdv" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:44:44.802031 env[1565]: 2025-11-01 00:44:44.794 [INFO][5321] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-s7kdv" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:44:44.802031 env[1565]: 2025-11-01 00:44:44.794 [INFO][5321] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-s7kdv" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0", GenerateName:"calico-apiserver-5d69b6c6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f6bb8ca-d7c7-4d64-919c-e85097fdc068", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d69b6c6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8", Pod:"calico-apiserver-5d69b6c6c-s7kdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f78c61e7cd", MAC:"ce:e9:3a:82:bf:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:44.802031 env[1565]: 2025-11-01 00:44:44.800 [INFO][5321] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8" Namespace="calico-apiserver" Pod="calico-apiserver-5d69b6c6c-s7kdv" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:44:44.807195 env[1565]: time="2025-11-01T00:44:44.807162711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:44.807195 env[1565]: time="2025-11-01T00:44:44.807183747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:44.807195 env[1565]: time="2025-11-01T00:44:44.807190666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:44.807333 env[1565]: time="2025-11-01T00:44:44.807254633Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8 pid=5505 runtime=io.containerd.runc.v2 Nov 1 00:44:44.809000 audit[5515]: NETFILTER_CFG table=filter:121 family=2 entries=61 op=nft_register_chain pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:44.809000 audit[5515]: SYSCALL arch=c000003e syscall=46 success=yes exit=29016 a0=3 a1=7fff33482180 a2=0 a3=7fff3348216c items=0 ppid=4234 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.809000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:44.812900 systemd[1]: Started cri-containerd-37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8.scope. Nov 1 00:44:44.819000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit: BPF prog-id=194 op=LOAD Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:44:44.819000 audit[5514]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=5505 pid=5514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337633937313930393737633361343162626232346239346139336131 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=5505 pid=5514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337633937313930393737633361343162626232346239346139336131 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit: BPF prog-id=195 op=LOAD Nov 1 00:44:44.819000 audit[5514]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000231b10 items=0 ppid=5505 pid=5514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337633937313930393737633361343162626232346239346139336131 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit: BPF prog-id=196 op=LOAD Nov 1 00:44:44.819000 audit[5514]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000231b58 items=0 ppid=5505 pid=5514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337633937313930393737633361343162626232346239346139336131 Nov 1 00:44:44.819000 audit: BPF prog-id=196 op=UNLOAD Nov 1 00:44:44.819000 audit: BPF prog-id=195 op=UNLOAD Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } 
for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { perfmon } for pid=5514 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit[5514]: AVC avc: denied { bpf } for pid=5514 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:44.819000 audit: BPF prog-id=197 op=LOAD Nov 1 00:44:44.819000 audit[5514]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000231f68 items=0 ppid=5505 pid=5514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337633937313930393737633361343162626232346239346139336131 Nov 1 00:44:44.836451 env[1565]: time="2025-11-01T00:44:44.836421207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d69b6c6c-s7kdv,Uid:2f6bb8ca-d7c7-4d64-919c-e85097fdc068,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8\"" Nov 1 00:44:44.837184 env[1565]: time="2025-11-01T00:44:44.837168601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:44:45.192257 env[1565]: time="2025-11-01T00:44:45.192109206Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:45.193410 env[1565]: time="2025-11-01T00:44:45.193244363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:44:45.193836 kubelet[2501]: E1101 00:44:45.193723 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:45.193836 kubelet[2501]: E1101 00:44:45.193819 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:45.194749 kubelet[2501]: E1101 00:44:45.194108 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pvv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d69b6c6c-s7kdv_calico-apiserver(2f6bb8ca-d7c7-4d64-919c-e85097fdc068): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" 
Nov 1 00:44:45.195557 kubelet[2501]: E1101 00:44:45.195408 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:44:45.438799 systemd-networkd[1324]: calia9db2ac3380: Gained IPv6LL Nov 1 00:44:45.691933 kubelet[2501]: E1101 00:44:45.691842 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:44:45.691933 kubelet[2501]: E1101 00:44:45.691844 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:44:45.711151 kubelet[2501]: I1101 00:44:45.711012 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4lsqg" podStartSLOduration=38.710967867 podStartE2EDuration="38.710967867s" podCreationTimestamp="2025-11-01 00:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:44:45.710748834 +0000 UTC m=+45.235678948" watchObservedRunningTime="2025-11-01 00:44:45.710967867 +0000 UTC m=+45.235897927" Nov 1 00:44:45.734000 audit[5539]: NETFILTER_CFG table=filter:122 family=2 entries=14 op=nft_register_rule pid=5539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:45.759648 systemd-networkd[1324]: cali10852593721: Gained IPv6LL Nov 1 00:44:45.763971 kernel: kauditd_printk_skb: 548 callbacks suppressed Nov 1 00:44:45.764032 kernel: audit: type=1325 audit(1761957885.734:1302): table=filter:122 family=2 entries=14 op=nft_register_rule pid=5539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:45.734000 audit[5539]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffda2173350 a2=0 a3=7ffda217333c items=0 ppid=2683 pid=5539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:45.815592 kernel: audit: type=1300 audit(1761957885.734:1302): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffda2173350 a2=0 a3=7ffda217333c items=0 ppid=2683 pid=5539 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:45.734000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:45.902570 kernel: audit: type=1327 audit(1761957885.734:1302): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:45.903000 audit[5539]: NETFILTER_CFG table=nat:123 family=2 entries=44 op=nft_register_rule pid=5539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:46.003299 kernel: audit: type=1325 audit(1761957885.903:1303): table=nat:123 family=2 entries=44 op=nft_register_rule pid=5539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:46.003333 kernel: audit: type=1300 audit(1761957885.903:1303): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffda2173350 a2=0 a3=7ffda217333c items=0 ppid=2683 pid=5539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:45.903000 audit[5539]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffda2173350 a2=0 a3=7ffda217333c items=0 ppid=2683 pid=5539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:46.090076 kernel: audit: type=1327 audit(1761957885.903:1303): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:45.903000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:46.654914 systemd-networkd[1324]: cali7f78c61e7cd: Gained IPv6LL Nov 1 00:44:46.696194 kubelet[2501]: E1101 00:44:46.696087 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:44:47.165000 audit[5548]: NETFILTER_CFG table=filter:124 family=2 entries=14 op=nft_register_rule pid=5548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:47.165000 audit[5548]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd3bde7ba0 a2=0 a3=7ffd3bde7b8c items=0 ppid=2683 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:47.314955 kernel: audit: type=1325 audit(1761957887.165:1304): table=filter:124 family=2 entries=14 op=nft_register_rule pid=5548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:47.315022 kernel: audit: type=1300 audit(1761957887.165:1304): arch=c000003e syscall=46 success=yes 
exit=5248 a0=3 a1=7ffd3bde7ba0 a2=0 a3=7ffd3bde7b8c items=0 ppid=2683 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:47.315052 kernel: audit: type=1327 audit(1761957887.165:1304): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:47.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:47.382000 audit[5548]: NETFILTER_CFG table=nat:125 family=2 entries=56 op=nft_register_chain pid=5548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:47.382000 audit[5548]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd3bde7ba0 a2=0 a3=7ffd3bde7b8c items=0 ppid=2683 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:47.437504 kernel: audit: type=1325 audit(1761957887.382:1305): table=nat:125 family=2 entries=56 op=nft_register_chain pid=5548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:47.382000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:52.532014 env[1565]: time="2025-11-01T00:44:52.531810645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:44:52.912339 env[1565]: time="2025-11-01T00:44:52.912184202Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:52.929039 env[1565]: time="2025-11-01T00:44:52.928857500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:44:52.929427 kubelet[2501]: E1101 00:44:52.929338 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:44:52.930249 kubelet[2501]: E1101 00:44:52.929445 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:44:52.930249 kubelet[2501]: E1101 00:44:52.929707 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:03c5ae372c8f45a08ebfcb08205f19e3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-slg5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c874fc74-qqjt8_calico-system(30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:52.932928 env[1565]: time="2025-11-01T00:44:52.932839108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:44:53.317092 env[1565]: time="2025-11-01T00:44:53.316979450Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:53.318040 env[1565]: time="2025-11-01T00:44:53.317930430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:44:53.318416 kubelet[2501]: E1101 00:44:53.318338 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:44:53.318650 kubelet[2501]: E1101 00:44:53.318438 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:44:53.318886 kubelet[2501]: E1101 00:44:53.318751 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-slg5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c874fc74-qqjt8_calico-system(30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:53.320230 kubelet[2501]: E1101 00:44:53.320090 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:44:54.531530 env[1565]: time="2025-11-01T00:44:54.531406952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:44:54.893000 env[1565]: time="2025-11-01T00:44:54.892751006Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:54.893607 env[1565]: time="2025-11-01T00:44:54.893460992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:44:54.893987 kubelet[2501]: E1101 00:44:54.893863 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:44:54.893987 kubelet[2501]: E1101 00:44:54.893963 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:44:54.895017 kubelet[2501]: E1101 00:44:54.894424 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p47qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-859bf66984-8h8hn_calico-system(9cd09edd-44db-4b31-b369-b622badfedc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:54.895553 env[1565]: time="2025-11-01T00:44:54.894705649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:44:54.895972 kubelet[2501]: E1101 00:44:54.895861 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:44:55.277817 env[1565]: time="2025-11-01T00:44:55.277669745Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:55.278643 env[1565]: time="2025-11-01T00:44:55.278474926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:44:55.279030 kubelet[2501]: E1101 00:44:55.278904 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:44:55.279030 kubelet[2501]: E1101 00:44:55.279004 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:44:55.279476 kubelet[2501]: E1101 00:44:55.279330 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfh6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bxfpm_calico-system(135646f8-0c66-45b5-80ce-9bb45c825de7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:55.280782 kubelet[2501]: E1101 00:44:55.280709 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:44:56.531369 env[1565]: 
time="2025-11-01T00:44:56.531277991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:44:56.861000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:56.886573 kernel: kauditd_printk_skb: 2 callbacks suppressed Nov 1 00:44:56.886650 kernel: audit: type=1400 audit(1761957896.861:1306): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:56.891546 env[1565]: time="2025-11-01T00:44:56.891470603Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:56.891921 env[1565]: time="2025-11-01T00:44:56.891880111Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:44:56.892044 kubelet[2501]: E1101 00:44:56.891999 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:56.892044 kubelet[2501]: E1101 00:44:56.892034 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:56.892258 kubelet[2501]: E1101 00:44:56.892188 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48hx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d69b6c6c-vws84_calico-apiserver(af4cf953-58b5-4727-a1f5-dcd340748032): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:56.893401 kubelet[2501]: E1101 00:44:56.893355 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:44:56.861000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:57.053941 kernel: audit: type=1400 audit(1761957896.861:1307): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:57.053983 kernel: audit: type=1300 audit(1761957896.861:1306): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0013dc480 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:44:56.861000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0013dc480 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:44:57.163379 kernel: audit: type=1300 audit(1761957896.861:1307): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000c1f280 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) 
Nov 1 00:44:56.861000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000c1f280 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:44:57.272931 kernel: audit: type=1327 audit(1761957896.861:1306): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:44:56.861000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:44:57.359963 kernel: audit: type=1327 audit(1761957896.861:1307): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:44:56.861000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:44:57.449883 kernel: audit: type=1400 audit(1761957896.945:1308): avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:56.945000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:57.529480 env[1565]: time="2025-11-01T00:44:57.529458746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:44:57.539574 kernel: audit: type=1300 audit(1761957896.945:1308): arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c010caf140 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:44:56.945000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c010caf140 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:44:57.638315 kernel: audit: type=1327 audit(1761957896.945:1308): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:44:56.945000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:44:57.731924 kernel: audit: type=1400 audit(1761957896.946:1309): avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:56.946000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:56.946000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c00fdde8d0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:44:56.946000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:44:56.951000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:56.951000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c00bd573b0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:44:56.951000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:44:56.951000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:56.951000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c00bd57410 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:44:56.951000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:44:56.951000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" 
path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:56.951000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c00fe50e00 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:44:56.951000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:44:56.951000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sdb9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:44:56.951000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c00bd574d0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:44:56.951000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:44:57.885433 env[1565]: time="2025-11-01T00:44:57.885374128Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:57.885930 env[1565]: time="2025-11-01T00:44:57.885874368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:44:57.886038 kubelet[2501]: E1101 00:44:57.885976 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:44:57.886038 kubelet[2501]: E1101 00:44:57.886004 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:44:57.886126 kubelet[2501]: E1101 00:44:57.886076 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4wc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:57.887803 env[1565]: time="2025-11-01T00:44:57.887756478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:44:58.245955 env[1565]: time="2025-11-01T00:44:58.245895033Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:58.246444 env[1565]: time="2025-11-01T00:44:58.246417248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:44:58.246575 kubelet[2501]: E1101 00:44:58.246556 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:44:58.246746 kubelet[2501]: E1101 00:44:58.246583 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:44:58.246746 kubelet[2501]: E1101 00:44:58.246652 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4wc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:58.247781 kubelet[2501]: E1101 00:44:58.247764 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:44:59.532183 env[1565]: time="2025-11-01T00:44:59.532053847Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:44:59.889059 env[1565]: time="2025-11-01T00:44:59.888806958Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:59.890043 env[1565]: time="2025-11-01T00:44:59.889888886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:44:59.890457 kubelet[2501]: E1101 00:44:59.890371 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:59.891176 kubelet[2501]: E1101 00:44:59.890483 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:59.891176 kubelet[2501]: E1101 00:44:59.890796 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pvv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-5d69b6c6c-s7kdv_calico-apiserver(2f6bb8ca-d7c7-4d64-919c-e85097fdc068): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:59.892268 kubelet[2501]: E1101 00:44:59.892138 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:45:00.527064 env[1565]: time="2025-11-01T00:45:00.527022005Z" level=info msg="StopPodSandbox for \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\"" Nov 1 00:45:00.564141 env[1565]: 2025-11-01 00:45:00.545 [WARNING][5570] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"027d8f2f-3806-4651-bd8a-463d116e5266", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d", Pod:"coredns-668d6bf9bc-4lsqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10852593721", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:00.564141 env[1565]: 2025-11-01 00:45:00.545 [INFO][5570] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:45:00.564141 env[1565]: 2025-11-01 00:45:00.545 [INFO][5570] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" iface="eth0" netns="" Nov 1 00:45:00.564141 env[1565]: 2025-11-01 00:45:00.545 [INFO][5570] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:45:00.564141 env[1565]: 2025-11-01 00:45:00.545 [INFO][5570] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:45:00.564141 env[1565]: 2025-11-01 00:45:00.556 [INFO][5588] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" HandleID="k8s-pod-network.9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:45:00.564141 env[1565]: 2025-11-01 00:45:00.556 [INFO][5588] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:00.564141 env[1565]: 2025-11-01 00:45:00.556 [INFO][5588] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:00.564141 env[1565]: 2025-11-01 00:45:00.561 [WARNING][5588] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" HandleID="k8s-pod-network.9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:45:00.564141 env[1565]: 2025-11-01 00:45:00.561 [INFO][5588] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" HandleID="k8s-pod-network.9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:45:00.564141 env[1565]: 2025-11-01 00:45:00.562 [INFO][5588] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:00.564141 env[1565]: 2025-11-01 00:45:00.563 [INFO][5570] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:45:00.564680 env[1565]: time="2025-11-01T00:45:00.564158838Z" level=info msg="TearDown network for sandbox \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\" successfully" Nov 1 00:45:00.564680 env[1565]: time="2025-11-01T00:45:00.564179268Z" level=info msg="StopPodSandbox for \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\" returns successfully" Nov 1 00:45:00.564680 env[1565]: time="2025-11-01T00:45:00.564500519Z" level=info msg="RemovePodSandbox for \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\"" Nov 1 00:45:00.564680 env[1565]: time="2025-11-01T00:45:00.564519874Z" level=info msg="Forcibly stopping sandbox \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\"" Nov 1 00:45:00.605780 env[1565]: 2025-11-01 00:45:00.585 [WARNING][5612] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"027d8f2f-3806-4651-bd8a-463d116e5266", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"782bd3ee2d968568e244ceeb0d2d0c14242674f0eea18780a93582d2e2fa193d", Pod:"coredns-668d6bf9bc-4lsqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10852593721", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:00.605780 env[1565]: 2025-11-01 00:45:00.585 [INFO][5612] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:45:00.605780 env[1565]: 2025-11-01 00:45:00.585 [INFO][5612] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" iface="eth0" netns="" Nov 1 00:45:00.605780 env[1565]: 2025-11-01 00:45:00.585 [INFO][5612] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:45:00.605780 env[1565]: 2025-11-01 00:45:00.585 [INFO][5612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:45:00.605780 env[1565]: 2025-11-01 00:45:00.597 [INFO][5627] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" HandleID="k8s-pod-network.9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:45:00.605780 env[1565]: 2025-11-01 00:45:00.597 [INFO][5627] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:00.605780 env[1565]: 2025-11-01 00:45:00.597 [INFO][5627] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:45:00.605780 env[1565]: 2025-11-01 00:45:00.602 [WARNING][5627] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" HandleID="k8s-pod-network.9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:45:00.605780 env[1565]: 2025-11-01 00:45:00.602 [INFO][5627] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" HandleID="k8s-pod-network.9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--4lsqg-eth0" Nov 1 00:45:00.605780 env[1565]: 2025-11-01 00:45:00.603 [INFO][5627] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:00.605780 env[1565]: 2025-11-01 00:45:00.604 [INFO][5612] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f" Nov 1 00:45:00.606199 env[1565]: time="2025-11-01T00:45:00.605799155Z" level=info msg="TearDown network for sandbox \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\" successfully" Nov 1 00:45:00.607468 env[1565]: time="2025-11-01T00:45:00.607450901Z" level=info msg="RemovePodSandbox \"9012e34f29a5d6c58ce446fbb061a54917e43571bacfdd14fbce924d22bda50f\" returns successfully" Nov 1 00:45:00.607826 env[1565]: time="2025-11-01T00:45:00.607811813Z" level=info msg="StopPodSandbox for \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\"" Nov 1 00:45:00.651631 env[1565]: 2025-11-01 00:45:00.630 [WARNING][5654] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"135646f8-0c66-45b5-80ce-9bb45c825de7", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea", Pod:"goldmane-666569f655-bxfpm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8050522b12a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:00.651631 env[1565]: 2025-11-01 00:45:00.630 [INFO][5654] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:45:00.651631 env[1565]: 2025-11-01 00:45:00.630 [INFO][5654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" iface="eth0" netns="" Nov 1 00:45:00.651631 env[1565]: 2025-11-01 00:45:00.630 [INFO][5654] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:45:00.651631 env[1565]: 2025-11-01 00:45:00.630 [INFO][5654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:45:00.651631 env[1565]: 2025-11-01 00:45:00.642 [INFO][5671] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" HandleID="k8s-pod-network.f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Workload="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:45:00.651631 env[1565]: 2025-11-01 00:45:00.643 [INFO][5671] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:00.651631 env[1565]: 2025-11-01 00:45:00.643 [INFO][5671] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:00.651631 env[1565]: 2025-11-01 00:45:00.648 [WARNING][5671] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" HandleID="k8s-pod-network.f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Workload="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:45:00.651631 env[1565]: 2025-11-01 00:45:00.648 [INFO][5671] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" HandleID="k8s-pod-network.f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Workload="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:45:00.651631 env[1565]: 2025-11-01 00:45:00.649 [INFO][5671] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:00.651631 env[1565]: 2025-11-01 00:45:00.650 [INFO][5654] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:45:00.652081 env[1565]: time="2025-11-01T00:45:00.651623310Z" level=info msg="TearDown network for sandbox \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\" successfully" Nov 1 00:45:00.652081 env[1565]: time="2025-11-01T00:45:00.651652744Z" level=info msg="StopPodSandbox for \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\" returns successfully" Nov 1 00:45:00.652081 env[1565]: time="2025-11-01T00:45:00.651938578Z" level=info msg="RemovePodSandbox for \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\"" Nov 1 00:45:00.652081 env[1565]: time="2025-11-01T00:45:00.651963743Z" level=info msg="Forcibly stopping sandbox \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\"" Nov 1 00:45:00.742154 env[1565]: 2025-11-01 00:45:00.675 [WARNING][5695] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"135646f8-0c66-45b5-80ce-9bb45c825de7", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"f59ce6e00a8c9808b794b01fd1ffb6e1c5a2779e073f0a61a83cb5d80e2850ea", Pod:"goldmane-666569f655-bxfpm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8050522b12a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:00.742154 env[1565]: 2025-11-01 00:45:00.676 [INFO][5695] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:45:00.742154 env[1565]: 2025-11-01 00:45:00.676 [INFO][5695] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" iface="eth0" netns="" Nov 1 00:45:00.742154 env[1565]: 2025-11-01 00:45:00.676 [INFO][5695] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:45:00.742154 env[1565]: 2025-11-01 00:45:00.676 [INFO][5695] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:45:00.742154 env[1565]: 2025-11-01 00:45:00.724 [INFO][5710] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" HandleID="k8s-pod-network.f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Workload="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:45:00.742154 env[1565]: 2025-11-01 00:45:00.725 [INFO][5710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:00.742154 env[1565]: 2025-11-01 00:45:00.725 [INFO][5710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:00.742154 env[1565]: 2025-11-01 00:45:00.735 [WARNING][5710] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" HandleID="k8s-pod-network.f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Workload="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:45:00.742154 env[1565]: 2025-11-01 00:45:00.735 [INFO][5710] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" HandleID="k8s-pod-network.f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Workload="ci--3510.3.8--n--3bc793b712-k8s-goldmane--666569f655--bxfpm-eth0" Nov 1 00:45:00.742154 env[1565]: 2025-11-01 00:45:00.737 [INFO][5710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:00.742154 env[1565]: 2025-11-01 00:45:00.740 [INFO][5695] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0" Nov 1 00:45:00.743199 env[1565]: time="2025-11-01T00:45:00.742177581Z" level=info msg="TearDown network for sandbox \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\" successfully" Nov 1 00:45:00.745966 env[1565]: time="2025-11-01T00:45:00.745916636Z" level=info msg="RemovePodSandbox \"f3c3179f96fb94372f4d2d518df2177a781351cce54b100c69187f4b89b485f0\" returns successfully" Nov 1 00:45:00.746470 env[1565]: time="2025-11-01T00:45:00.746429527Z" level=info msg="StopPodSandbox for \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\"" Nov 1 00:45:00.847417 env[1565]: 2025-11-01 00:45:00.800 [WARNING][5739] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0", GenerateName:"calico-kube-controllers-859bf66984-", Namespace:"calico-system", SelfLink:"", UID:"9cd09edd-44db-4b31-b369-b622badfedc3", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"859bf66984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37", Pod:"calico-kube-controllers-859bf66984-8h8hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid098c8d73a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:00.847417 env[1565]: 2025-11-01 00:45:00.800 [INFO][5739] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:45:00.847417 env[1565]: 2025-11-01 00:45:00.800 [INFO][5739] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" iface="eth0" netns="" Nov 1 00:45:00.847417 env[1565]: 2025-11-01 00:45:00.800 [INFO][5739] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:45:00.847417 env[1565]: 2025-11-01 00:45:00.800 [INFO][5739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:45:00.847417 env[1565]: 2025-11-01 00:45:00.833 [INFO][5757] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" HandleID="k8s-pod-network.3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:45:00.847417 env[1565]: 2025-11-01 00:45:00.833 [INFO][5757] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:00.847417 env[1565]: 2025-11-01 00:45:00.833 [INFO][5757] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:00.847417 env[1565]: 2025-11-01 00:45:00.841 [WARNING][5757] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" HandleID="k8s-pod-network.3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:45:00.847417 env[1565]: 2025-11-01 00:45:00.841 [INFO][5757] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" HandleID="k8s-pod-network.3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:45:00.847417 env[1565]: 2025-11-01 00:45:00.843 [INFO][5757] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:00.847417 env[1565]: 2025-11-01 00:45:00.845 [INFO][5739] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:45:00.847417 env[1565]: time="2025-11-01T00:45:00.847392901Z" level=info msg="TearDown network for sandbox \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\" successfully" Nov 1 00:45:00.848378 env[1565]: time="2025-11-01T00:45:00.847434057Z" level=info msg="StopPodSandbox for \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\" returns successfully" Nov 1 00:45:00.848378 env[1565]: time="2025-11-01T00:45:00.848001685Z" level=info msg="RemovePodSandbox for \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\"" Nov 1 00:45:00.848378 env[1565]: time="2025-11-01T00:45:00.848056292Z" level=info msg="Forcibly stopping sandbox \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\"" Nov 1 00:45:00.929380 env[1565]: 2025-11-01 00:45:00.889 [WARNING][5784] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0", GenerateName:"calico-kube-controllers-859bf66984-", Namespace:"calico-system", SelfLink:"", UID:"9cd09edd-44db-4b31-b369-b622badfedc3", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"859bf66984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"49aa6e2f8bd03af2294a801ea17da1a9eca3955df407e8f03484dcae85cdbc37", Pod:"calico-kube-controllers-859bf66984-8h8hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid098c8d73a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:00.929380 env[1565]: 2025-11-01 00:45:00.890 [INFO][5784] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:45:00.929380 env[1565]: 2025-11-01 00:45:00.890 [INFO][5784] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" iface="eth0" netns="" Nov 1 00:45:00.929380 env[1565]: 2025-11-01 00:45:00.890 [INFO][5784] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:45:00.929380 env[1565]: 2025-11-01 00:45:00.890 [INFO][5784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:45:00.929380 env[1565]: 2025-11-01 00:45:00.915 [INFO][5801] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" HandleID="k8s-pod-network.3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:45:00.929380 env[1565]: 2025-11-01 00:45:00.915 [INFO][5801] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:00.929380 env[1565]: 2025-11-01 00:45:00.915 [INFO][5801] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:00.929380 env[1565]: 2025-11-01 00:45:00.923 [WARNING][5801] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" HandleID="k8s-pod-network.3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:45:00.929380 env[1565]: 2025-11-01 00:45:00.924 [INFO][5801] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" HandleID="k8s-pod-network.3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--kube--controllers--859bf66984--8h8hn-eth0" Nov 1 00:45:00.929380 env[1565]: 2025-11-01 00:45:00.925 [INFO][5801] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:00.929380 env[1565]: 2025-11-01 00:45:00.927 [INFO][5784] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c" Nov 1 00:45:00.930214 env[1565]: time="2025-11-01T00:45:00.929409983Z" level=info msg="TearDown network for sandbox \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\" successfully" Nov 1 00:45:00.932166 env[1565]: time="2025-11-01T00:45:00.932133759Z" level=info msg="RemovePodSandbox \"3e731a77e09ac0c8df95412ce0b74e992c0540b2fb5d6ba660c935ddb252610c\" returns successfully" Nov 1 00:45:00.932660 env[1565]: time="2025-11-01T00:45:00.932623112Z" level=info msg="StopPodSandbox for \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\"" Nov 1 00:45:01.013322 env[1565]: 2025-11-01 00:45:00.973 [WARNING][5826] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0", GenerateName:"calico-apiserver-5d69b6c6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f6bb8ca-d7c7-4d64-919c-e85097fdc068", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d69b6c6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8", Pod:"calico-apiserver-5d69b6c6c-s7kdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f78c61e7cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:01.013322 env[1565]: 2025-11-01 00:45:00.973 [INFO][5826] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:45:01.013322 env[1565]: 2025-11-01 00:45:00.973 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" iface="eth0" netns="" Nov 1 00:45:01.013322 env[1565]: 2025-11-01 00:45:00.973 [INFO][5826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:45:01.013322 env[1565]: 2025-11-01 00:45:00.973 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:45:01.013322 env[1565]: 2025-11-01 00:45:00.999 [INFO][5845] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" HandleID="k8s-pod-network.3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:45:01.013322 env[1565]: 2025-11-01 00:45:00.999 [INFO][5845] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:01.013322 env[1565]: 2025-11-01 00:45:00.999 [INFO][5845] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:01.013322 env[1565]: 2025-11-01 00:45:01.007 [WARNING][5845] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" HandleID="k8s-pod-network.3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:45:01.013322 env[1565]: 2025-11-01 00:45:01.007 [INFO][5845] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" HandleID="k8s-pod-network.3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:45:01.013322 env[1565]: 2025-11-01 00:45:01.009 [INFO][5845] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:01.013322 env[1565]: 2025-11-01 00:45:01.011 [INFO][5826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:45:01.014262 env[1565]: time="2025-11-01T00:45:01.013351705Z" level=info msg="TearDown network for sandbox \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\" successfully" Nov 1 00:45:01.014262 env[1565]: time="2025-11-01T00:45:01.013392192Z" level=info msg="StopPodSandbox for \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\" returns successfully" Nov 1 00:45:01.014262 env[1565]: time="2025-11-01T00:45:01.013881258Z" level=info msg="RemovePodSandbox for \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\"" Nov 1 00:45:01.014262 env[1565]: time="2025-11-01T00:45:01.013925884Z" level=info msg="Forcibly stopping sandbox \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\"" Nov 1 00:45:01.094075 env[1565]: 2025-11-01 00:45:01.055 [WARNING][5870] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0", GenerateName:"calico-apiserver-5d69b6c6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f6bb8ca-d7c7-4d64-919c-e85097fdc068", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d69b6c6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"37c97190977c3a41bbb24b94a93a1cc29ed7afe833fe226759a203225b1077d8", Pod:"calico-apiserver-5d69b6c6c-s7kdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f78c61e7cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:01.094075 env[1565]: 2025-11-01 00:45:01.055 [INFO][5870] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:45:01.094075 env[1565]: 2025-11-01 00:45:01.055 [INFO][5870] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" iface="eth0" netns="" Nov 1 00:45:01.094075 env[1565]: 2025-11-01 00:45:01.055 [INFO][5870] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:45:01.094075 env[1565]: 2025-11-01 00:45:01.055 [INFO][5870] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:45:01.094075 env[1565]: 2025-11-01 00:45:01.080 [INFO][5888] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" HandleID="k8s-pod-network.3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:45:01.094075 env[1565]: 2025-11-01 00:45:01.080 [INFO][5888] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:01.094075 env[1565]: 2025-11-01 00:45:01.080 [INFO][5888] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:01.094075 env[1565]: 2025-11-01 00:45:01.088 [WARNING][5888] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" HandleID="k8s-pod-network.3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:45:01.094075 env[1565]: 2025-11-01 00:45:01.088 [INFO][5888] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" HandleID="k8s-pod-network.3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--s7kdv-eth0" Nov 1 00:45:01.094075 env[1565]: 2025-11-01 00:45:01.090 [INFO][5888] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:01.094075 env[1565]: 2025-11-01 00:45:01.092 [INFO][5870] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516" Nov 1 00:45:01.094908 env[1565]: time="2025-11-01T00:45:01.094083969Z" level=info msg="TearDown network for sandbox \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\" successfully" Nov 1 00:45:01.096865 env[1565]: time="2025-11-01T00:45:01.096828837Z" level=info msg="RemovePodSandbox \"3b2462e74853ce26f7bf815bac051c874fce42806027fa6357e30e16070af516\" returns successfully" Nov 1 00:45:01.097403 env[1565]: time="2025-11-01T00:45:01.097368459Z" level=info msg="StopPodSandbox for \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\"" Nov 1 00:45:01.171636 env[1565]: 2025-11-01 00:45:01.140 [WARNING][5912] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-whisker--6897698c76--c6xgv-eth0" Nov 1 00:45:01.171636 env[1565]: 2025-11-01 00:45:01.140 [INFO][5912] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 00:45:01.171636 env[1565]: 2025-11-01 00:45:01.140 [INFO][5912] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" iface="eth0" netns="" Nov 1 00:45:01.171636 env[1565]: 2025-11-01 00:45:01.140 [INFO][5912] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 00:45:01.171636 env[1565]: 2025-11-01 00:45:01.140 [INFO][5912] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 00:45:01.171636 env[1565]: 2025-11-01 00:45:01.159 [INFO][5930] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" HandleID="k8s-pod-network.33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Workload="ci--3510.3.8--n--3bc793b712-k8s-whisker--6897698c76--c6xgv-eth0" Nov 1 00:45:01.171636 env[1565]: 2025-11-01 00:45:01.159 [INFO][5930] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:01.171636 env[1565]: 2025-11-01 00:45:01.159 [INFO][5930] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:01.171636 env[1565]: 2025-11-01 00:45:01.166 [WARNING][5930] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" HandleID="k8s-pod-network.33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Workload="ci--3510.3.8--n--3bc793b712-k8s-whisker--6897698c76--c6xgv-eth0" Nov 1 00:45:01.171636 env[1565]: 2025-11-01 00:45:01.166 [INFO][5930] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" HandleID="k8s-pod-network.33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Workload="ci--3510.3.8--n--3bc793b712-k8s-whisker--6897698c76--c6xgv-eth0" Nov 1 00:45:01.171636 env[1565]: 2025-11-01 00:45:01.168 [INFO][5930] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:01.171636 env[1565]: 2025-11-01 00:45:01.170 [INFO][5912] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 00:45:01.172528 env[1565]: time="2025-11-01T00:45:01.171666940Z" level=info msg="TearDown network for sandbox \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\" successfully" Nov 1 00:45:01.172528 env[1565]: time="2025-11-01T00:45:01.171697975Z" level=info msg="StopPodSandbox for \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\" returns successfully" Nov 1 00:45:01.172528 env[1565]: time="2025-11-01T00:45:01.172126992Z" level=info msg="RemovePodSandbox for \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\"" Nov 1 00:45:01.172528 env[1565]: time="2025-11-01T00:45:01.172163872Z" level=info msg="Forcibly stopping sandbox \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\"" Nov 1 00:45:01.234782 env[1565]: 2025-11-01 00:45:01.205 [WARNING][5954] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" WorkloadEndpoint="ci--3510.3.8--n--3bc793b712-k8s-whisker--6897698c76--c6xgv-eth0" Nov 1 00:45:01.234782 env[1565]: 2025-11-01 00:45:01.205 [INFO][5954] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 00:45:01.234782 env[1565]: 2025-11-01 00:45:01.205 [INFO][5954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" iface="eth0" netns="" Nov 1 00:45:01.234782 env[1565]: 2025-11-01 00:45:01.205 [INFO][5954] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 00:45:01.234782 env[1565]: 2025-11-01 00:45:01.205 [INFO][5954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 00:45:01.234782 env[1565]: 2025-11-01 00:45:01.223 [INFO][5973] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" HandleID="k8s-pod-network.33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Workload="ci--3510.3.8--n--3bc793b712-k8s-whisker--6897698c76--c6xgv-eth0" Nov 1 00:45:01.234782 env[1565]: 2025-11-01 00:45:01.223 [INFO][5973] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:01.234782 env[1565]: 2025-11-01 00:45:01.223 [INFO][5973] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:45:01.234782 env[1565]: 2025-11-01 00:45:01.230 [WARNING][5973] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" HandleID="k8s-pod-network.33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Workload="ci--3510.3.8--n--3bc793b712-k8s-whisker--6897698c76--c6xgv-eth0" Nov 1 00:45:01.234782 env[1565]: 2025-11-01 00:45:01.230 [INFO][5973] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" HandleID="k8s-pod-network.33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Workload="ci--3510.3.8--n--3bc793b712-k8s-whisker--6897698c76--c6xgv-eth0" Nov 1 00:45:01.234782 env[1565]: 2025-11-01 00:45:01.232 [INFO][5973] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:01.234782 env[1565]: 2025-11-01 00:45:01.233 [INFO][5954] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f" Nov 1 00:45:01.235331 env[1565]: time="2025-11-01T00:45:01.234810503Z" level=info msg="TearDown network for sandbox \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\" successfully" Nov 1 00:45:01.237052 env[1565]: time="2025-11-01T00:45:01.236991950Z" level=info msg="RemovePodSandbox \"33c3406b6a04aadc406390df4074b99655b17540950ee01edaabf738a9ef9d4f\" returns successfully" Nov 1 00:45:01.237463 env[1565]: time="2025-11-01T00:45:01.237417436Z" level=info msg="StopPodSandbox for \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\"" Nov 1 00:45:01.301209 env[1565]: 2025-11-01 00:45:01.270 [WARNING][5999] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0", GenerateName:"calico-apiserver-5d69b6c6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"af4cf953-58b5-4727-a1f5-dcd340748032", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d69b6c6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347", Pod:"calico-apiserver-5d69b6c6c-vws84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e033247e8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:01.301209 env[1565]: 2025-11-01 00:45:01.270 [INFO][5999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:45:01.301209 env[1565]: 2025-11-01 00:45:01.270 [INFO][5999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" iface="eth0" netns="" Nov 1 00:45:01.301209 env[1565]: 2025-11-01 00:45:01.270 [INFO][5999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:45:01.301209 env[1565]: 2025-11-01 00:45:01.270 [INFO][5999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:45:01.301209 env[1565]: 2025-11-01 00:45:01.290 [INFO][6017] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" HandleID="k8s-pod-network.66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:45:01.301209 env[1565]: 2025-11-01 00:45:01.290 [INFO][6017] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:01.301209 env[1565]: 2025-11-01 00:45:01.290 [INFO][6017] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:01.301209 env[1565]: 2025-11-01 00:45:01.296 [WARNING][6017] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" HandleID="k8s-pod-network.66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:45:01.301209 env[1565]: 2025-11-01 00:45:01.296 [INFO][6017] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" HandleID="k8s-pod-network.66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:45:01.301209 env[1565]: 2025-11-01 00:45:01.298 [INFO][6017] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:01.301209 env[1565]: 2025-11-01 00:45:01.299 [INFO][5999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:45:01.302116 env[1565]: time="2025-11-01T00:45:01.301262718Z" level=info msg="TearDown network for sandbox \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\" successfully" Nov 1 00:45:01.302116 env[1565]: time="2025-11-01T00:45:01.301303174Z" level=info msg="StopPodSandbox for \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\" returns successfully" Nov 1 00:45:01.302611 env[1565]: time="2025-11-01T00:45:01.302559646Z" level=info msg="RemovePodSandbox for \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\"" Nov 1 00:45:01.302758 env[1565]: time="2025-11-01T00:45:01.302685574Z" level=info msg="Forcibly stopping sandbox \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\"" Nov 1 00:45:01.366343 env[1565]: 2025-11-01 00:45:01.334 [WARNING][6042] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0", GenerateName:"calico-apiserver-5d69b6c6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"af4cf953-58b5-4727-a1f5-dcd340748032", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d69b6c6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"80090ac8eb677f7d08cd203f8dfcfafc867b629ab5a3827e7e17db8415cae347", Pod:"calico-apiserver-5d69b6c6c-vws84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e033247e8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:01.366343 env[1565]: 2025-11-01 00:45:01.335 [INFO][6042] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:45:01.366343 env[1565]: 2025-11-01 00:45:01.335 [INFO][6042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" iface="eth0" netns="" Nov 1 00:45:01.366343 env[1565]: 2025-11-01 00:45:01.335 [INFO][6042] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:45:01.366343 env[1565]: 2025-11-01 00:45:01.335 [INFO][6042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:45:01.366343 env[1565]: 2025-11-01 00:45:01.355 [INFO][6060] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" HandleID="k8s-pod-network.66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:45:01.366343 env[1565]: 2025-11-01 00:45:01.355 [INFO][6060] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:01.366343 env[1565]: 2025-11-01 00:45:01.355 [INFO][6060] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:01.366343 env[1565]: 2025-11-01 00:45:01.361 [WARNING][6060] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" HandleID="k8s-pod-network.66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:45:01.366343 env[1565]: 2025-11-01 00:45:01.361 [INFO][6060] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" HandleID="k8s-pod-network.66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Workload="ci--3510.3.8--n--3bc793b712-k8s-calico--apiserver--5d69b6c6c--vws84-eth0" Nov 1 00:45:01.366343 env[1565]: 2025-11-01 00:45:01.363 [INFO][6060] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:01.366343 env[1565]: 2025-11-01 00:45:01.364 [INFO][6042] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd" Nov 1 00:45:01.366969 env[1565]: time="2025-11-01T00:45:01.366362669Z" level=info msg="TearDown network for sandbox \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\" successfully" Nov 1 00:45:01.368558 env[1565]: time="2025-11-01T00:45:01.368532360Z" level=info msg="RemovePodSandbox \"66122fdc0a6acc2b3e2d993a6844d8b4c544e2302c8a16b87016118e33704ffd\" returns successfully" Nov 1 00:45:01.368985 env[1565]: time="2025-11-01T00:45:01.368932308Z" level=info msg="StopPodSandbox for \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\"" Nov 1 00:45:01.432594 env[1565]: 2025-11-01 00:45:01.400 [WARNING][6085] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2eed7e34-9d6a-4f7a-a712-82a7a599696d", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855", Pod:"coredns-668d6bf9bc-86gd2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9db2ac3380", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:01.432594 env[1565]: 2025-11-01 00:45:01.401 [INFO][6085] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:45:01.432594 env[1565]: 2025-11-01 00:45:01.401 [INFO][6085] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" iface="eth0" netns="" Nov 1 00:45:01.432594 env[1565]: 2025-11-01 00:45:01.401 [INFO][6085] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:45:01.432594 env[1565]: 2025-11-01 00:45:01.401 [INFO][6085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:45:01.432594 env[1565]: 2025-11-01 00:45:01.421 [INFO][6104] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" HandleID="k8s-pod-network.de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:45:01.432594 env[1565]: 2025-11-01 00:45:01.421 [INFO][6104] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:01.432594 env[1565]: 2025-11-01 00:45:01.421 [INFO][6104] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:01.432594 env[1565]: 2025-11-01 00:45:01.428 [WARNING][6104] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" HandleID="k8s-pod-network.de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:45:01.432594 env[1565]: 2025-11-01 00:45:01.428 [INFO][6104] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" HandleID="k8s-pod-network.de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:45:01.432594 env[1565]: 2025-11-01 00:45:01.429 [INFO][6104] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:01.432594 env[1565]: 2025-11-01 00:45:01.431 [INFO][6085] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:45:01.433515 env[1565]: time="2025-11-01T00:45:01.432580406Z" level=info msg="TearDown network for sandbox \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\" successfully" Nov 1 00:45:01.433515 env[1565]: time="2025-11-01T00:45:01.432614102Z" level=info msg="StopPodSandbox for \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\" returns successfully" Nov 1 00:45:01.433515 env[1565]: time="2025-11-01T00:45:01.433025271Z" level=info msg="RemovePodSandbox for \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\"" Nov 1 00:45:01.433515 env[1565]: time="2025-11-01T00:45:01.433058226Z" level=info msg="Forcibly stopping sandbox \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\"" Nov 1 00:45:01.495991 env[1565]: 2025-11-01 00:45:01.465 [WARNING][6130] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2eed7e34-9d6a-4f7a-a712-82a7a599696d", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"b3f03cd150e69fcb73547d36c314a7c005b5233fa812b88df30d275ccc7dc855", Pod:"coredns-668d6bf9bc-86gd2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9db2ac3380", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:01.495991 env[1565]: 2025-11-01 00:45:01.465 [INFO][6130] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:45:01.495991 env[1565]: 2025-11-01 00:45:01.465 [INFO][6130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" iface="eth0" netns="" Nov 1 00:45:01.495991 env[1565]: 2025-11-01 00:45:01.465 [INFO][6130] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:45:01.495991 env[1565]: 2025-11-01 00:45:01.465 [INFO][6130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:45:01.495991 env[1565]: 2025-11-01 00:45:01.484 [INFO][6148] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" HandleID="k8s-pod-network.de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:45:01.495991 env[1565]: 2025-11-01 00:45:01.484 [INFO][6148] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:01.495991 env[1565]: 2025-11-01 00:45:01.484 [INFO][6148] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:01.495991 env[1565]: 2025-11-01 00:45:01.491 [WARNING][6148] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" HandleID="k8s-pod-network.de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:45:01.495991 env[1565]: 2025-11-01 00:45:01.491 [INFO][6148] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" HandleID="k8s-pod-network.de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Workload="ci--3510.3.8--n--3bc793b712-k8s-coredns--668d6bf9bc--86gd2-eth0" Nov 1 00:45:01.495991 env[1565]: 2025-11-01 00:45:01.493 [INFO][6148] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:01.495991 env[1565]: 2025-11-01 00:45:01.494 [INFO][6130] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78" Nov 1 00:45:01.496637 env[1565]: time="2025-11-01T00:45:01.495991349Z" level=info msg="TearDown network for sandbox \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\" successfully" Nov 1 00:45:01.498393 env[1565]: time="2025-11-01T00:45:01.498339849Z" level=info msg="RemovePodSandbox \"de15a5c643087fd5e444c8bac163e3bdab5a5646ee124df761536fa923519c78\" returns successfully" Nov 1 00:45:01.498829 env[1565]: time="2025-11-01T00:45:01.498772273Z" level=info msg="StopPodSandbox for \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\"" Nov 1 00:45:01.562649 env[1565]: 2025-11-01 00:45:01.531 [WARNING][6176] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce15fc81-d33c-45b3-b08a-5d312fb076f0", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f", Pod:"csi-node-driver-qpkjg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94fde579e17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:01.562649 env[1565]: 2025-11-01 00:45:01.531 [INFO][6176] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:45:01.562649 env[1565]: 2025-11-01 00:45:01.531 [INFO][6176] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" iface="eth0" netns="" Nov 1 00:45:01.562649 env[1565]: 2025-11-01 00:45:01.531 [INFO][6176] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:45:01.562649 env[1565]: 2025-11-01 00:45:01.531 [INFO][6176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:45:01.562649 env[1565]: 2025-11-01 00:45:01.550 [INFO][6195] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" HandleID="k8s-pod-network.87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Workload="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:45:01.562649 env[1565]: 2025-11-01 00:45:01.550 [INFO][6195] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:01.562649 env[1565]: 2025-11-01 00:45:01.550 [INFO][6195] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:01.562649 env[1565]: 2025-11-01 00:45:01.558 [WARNING][6195] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" HandleID="k8s-pod-network.87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Workload="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:45:01.562649 env[1565]: 2025-11-01 00:45:01.558 [INFO][6195] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" HandleID="k8s-pod-network.87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Workload="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:45:01.562649 env[1565]: 2025-11-01 00:45:01.559 [INFO][6195] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:01.562649 env[1565]: 2025-11-01 00:45:01.561 [INFO][6176] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:45:01.563320 env[1565]: time="2025-11-01T00:45:01.562682170Z" level=info msg="TearDown network for sandbox \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\" successfully" Nov 1 00:45:01.563320 env[1565]: time="2025-11-01T00:45:01.562721707Z" level=info msg="StopPodSandbox for \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\" returns successfully" Nov 1 00:45:01.563320 env[1565]: time="2025-11-01T00:45:01.563147547Z" level=info msg="RemovePodSandbox for \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\"" Nov 1 00:45:01.563320 env[1565]: time="2025-11-01T00:45:01.563180373Z" level=info msg="Forcibly stopping sandbox \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\"" Nov 1 00:45:01.625120 env[1565]: 2025-11-01 00:45:01.593 [WARNING][6222] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce15fc81-d33c-45b3-b08a-5d312fb076f0", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-3bc793b712", ContainerID:"03421fe329cbcc570a6e7bba2e54f16b016021a58ba46685ffe77439f803662f", Pod:"csi-node-driver-qpkjg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94fde579e17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:45:01.625120 env[1565]: 2025-11-01 00:45:01.594 [INFO][6222] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:45:01.625120 env[1565]: 2025-11-01 00:45:01.594 [INFO][6222] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" iface="eth0" netns="" Nov 1 00:45:01.625120 env[1565]: 2025-11-01 00:45:01.594 [INFO][6222] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:45:01.625120 env[1565]: 2025-11-01 00:45:01.594 [INFO][6222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:45:01.625120 env[1565]: 2025-11-01 00:45:01.613 [INFO][6240] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" HandleID="k8s-pod-network.87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Workload="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:45:01.625120 env[1565]: 2025-11-01 00:45:01.613 [INFO][6240] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:45:01.625120 env[1565]: 2025-11-01 00:45:01.613 [INFO][6240] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:45:01.625120 env[1565]: 2025-11-01 00:45:01.620 [WARNING][6240] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" HandleID="k8s-pod-network.87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Workload="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:45:01.625120 env[1565]: 2025-11-01 00:45:01.620 [INFO][6240] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" HandleID="k8s-pod-network.87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Workload="ci--3510.3.8--n--3bc793b712-k8s-csi--node--driver--qpkjg-eth0" Nov 1 00:45:01.625120 env[1565]: 2025-11-01 00:45:01.622 [INFO][6240] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:45:01.625120 env[1565]: 2025-11-01 00:45:01.623 [INFO][6222] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408" Nov 1 00:45:01.625952 env[1565]: time="2025-11-01T00:45:01.625119439Z" level=info msg="TearDown network for sandbox \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\" successfully" Nov 1 00:45:01.627329 env[1565]: time="2025-11-01T00:45:01.627277083Z" level=info msg="RemovePodSandbox \"87a12ebe146908af883f3986696c20a7a805fd49127952f7a817dd9688094408\" returns successfully" Nov 1 00:45:03.471000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:03.499745 kernel: kauditd_printk_skb: 14 callbacks suppressed Nov 1 00:45:03.499789 kernel: audit: type=1400 audit(1761957903.471:1314): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:03.471000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0017fba20 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:45:03.711994 kernel: audit: type=1300 audit(1761957903.471:1314): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0017fba20 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:45:03.712069 kernel: audit: type=1327 audit(1761957903.471:1314): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:45:03.471000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:45:03.806318 kernel: audit: type=1400 audit(1761957903.472:1315): avc: denied { watch } for pid=2319 
comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:03.472000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:03.472000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000b82f60 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:45:03.896562 kernel: audit: type=1300 audit(1761957903.472:1315): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000b82f60 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:45:03.472000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:45:04.112082 kernel: audit: type=1327 audit(1761957903.472:1315): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:45:04.112116 kernel: audit: type=1400 audit(1761957903.473:1316): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:03.473000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:04.202334 kernel: audit: type=1300 audit(1761957903.473:1316): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001c31360 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:45:03.473000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001c31360 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:45:04.322870 kernel: audit: type=1327 audit(1761957903.473:1316): 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:45:03.473000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:45:04.416826 kernel: audit: type=1400 audit(1761957903.474:1317): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:03.474000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:03.474000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000b83120 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:45:03.474000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:45:06.531911 kubelet[2501]: E1101 00:45:06.531775 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:45:07.532395 kubelet[2501]: E1101 00:45:07.532289 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:45:09.529890 kubelet[2501]: E1101 00:45:09.529843 
2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:45:10.530197 kubelet[2501]: E1101 00:45:10.530156 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:45:11.531090 kubelet[2501]: E1101 00:45:11.530984 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:45:14.531567 kubelet[2501]: E1101 00:45:14.531428 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:45:18.532331 env[1565]: time="2025-11-01T00:45:18.532216123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:45:18.899724 env[1565]: time="2025-11-01T00:45:18.899428174Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:18.900599 env[1565]: time="2025-11-01T00:45:18.900422893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:45:18.900991 kubelet[2501]: E1101 00:45:18.900861 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:45:18.900991 kubelet[2501]: E1101 00:45:18.900967 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:45:18.902023 kubelet[2501]: E1101 00:45:18.901267 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfh6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bxfpm_calico-system(135646f8-0c66-45b5-80ce-9bb45c825de7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:18.902775 kubelet[2501]: E1101 00:45:18.902643 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:45:20.533180 env[1565]: time="2025-11-01T00:45:20.533072942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:45:20.900343 env[1565]: time="2025-11-01T00:45:20.900071214Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:20.900980 env[1565]: time="2025-11-01T00:45:20.900838609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:45:20.901348 kubelet[2501]: E1101 00:45:20.901250 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:45:20.902162 kubelet[2501]: E1101 00:45:20.901350 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:45:20.902162 kubelet[2501]: E1101 00:45:20.901742 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p47qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-859bf66984-8h8hn_calico-system(9cd09edd-44db-4b31-b369-b622badfedc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:20.903231 kubelet[2501]: E1101 00:45:20.903157 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:45:21.529709 env[1565]: time="2025-11-01T00:45:21.529667939Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:45:21.892422 env[1565]: time="2025-11-01T00:45:21.892180327Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:21.893329 env[1565]: time="2025-11-01T00:45:21.893038944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:45:21.893612 kubelet[2501]: E1101 00:45:21.893455 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:45:21.893862 kubelet[2501]: E1101 00:45:21.893606 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:45:21.894059 kubelet[2501]: E1101 00:45:21.893841 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:03c5ae372c8f45a08ebfcb08205f19e3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-slg5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c874fc74-qqjt8_calico-system(30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:21.896883 env[1565]: time="2025-11-01T00:45:21.896773731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:45:22.261862 env[1565]: time="2025-11-01T00:45:22.261721831Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:22.262808 env[1565]: time="2025-11-01T00:45:22.262651466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:45:22.263240 kubelet[2501]: E1101 00:45:22.263124 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:45:22.263240 kubelet[2501]: E1101 00:45:22.263222 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:45:22.264186 kubelet[2501]: E1101 00:45:22.263478 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-slg5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c874fc74-qqjt8_calico-system(30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:22.264983 kubelet[2501]: E1101 00:45:22.264853 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:45:22.531594 env[1565]: time="2025-11-01T00:45:22.531384053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:45:22.898255 env[1565]: time="2025-11-01T00:45:22.897985428Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:22.899226 env[1565]: time="2025-11-01T00:45:22.899101800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:45:22.899466 kubelet[2501]: E1101 00:45:22.899407 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:45:22.899619 kubelet[2501]: E1101 00:45:22.899485 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:45:22.899769 kubelet[2501]: E1101 00:45:22.899687 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4wc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:22.902301 env[1565]: time="2025-11-01T00:45:22.902207877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:45:23.281348 env[1565]: time="2025-11-01T00:45:23.281285693Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:23.281821 env[1565]: time="2025-11-01T00:45:23.281763188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:45:23.281932 kubelet[2501]: E1101 00:45:23.281904 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:45:23.282136 kubelet[2501]: E1101 00:45:23.281936 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:45:23.282136 kubelet[2501]: E1101 00:45:23.282006 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4wc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:23.283234 kubelet[2501]: E1101 00:45:23.283163 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:45:24.530119 env[1565]: time="2025-11-01T00:45:24.530087034Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:45:24.896523 env[1565]: time="2025-11-01T00:45:24.896441048Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:24.897090 env[1565]: time="2025-11-01T00:45:24.897034726Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:45:24.897195 kubelet[2501]: E1101 00:45:24.897151 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:45:24.897195 kubelet[2501]: E1101 00:45:24.897183 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:45:24.897419 kubelet[2501]: E1101 00:45:24.897268 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48hx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-5d69b6c6c-vws84_calico-apiserver(af4cf953-58b5-4727-a1f5-dcd340748032): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:24.898363 kubelet[2501]: E1101 00:45:24.898315 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:45:29.530055 env[1565]: time="2025-11-01T00:45:29.530028011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:45:29.884191 env[1565]: time="2025-11-01T00:45:29.883893415Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:29.885049 env[1565]: time="2025-11-01T00:45:29.884913783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:45:29.885483 kubelet[2501]: E1101 00:45:29.885397 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:45:29.886602 kubelet[2501]: E1101 00:45:29.885530 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:45:29.886602 kubelet[2501]: E1101 00:45:29.885955 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pvv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d69b6c6c-s7kdv_calico-apiserver(2f6bb8ca-d7c7-4d64-919c-e85097fdc068): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:29.887580 kubelet[2501]: E1101 00:45:29.887412 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:45:32.531068 kubelet[2501]: E1101 00:45:32.531028 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:45:33.531802 kubelet[2501]: E1101 00:45:33.531681 2501 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:45:33.533151 kubelet[2501]: E1101 00:45:33.533021 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:45:37.530294 kubelet[2501]: E1101 00:45:37.530239 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:45:39.530798 kubelet[2501]: E1101 00:45:39.530698 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:45:43.529335 kubelet[2501]: E1101 00:45:43.529309 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:45:45.530284 kubelet[2501]: E1101 00:45:45.530240 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:45:46.531806 kubelet[2501]: E1101 00:45:46.531697 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:45:48.530703 kubelet[2501]: E1101 00:45:48.530671 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:45:50.531983 kubelet[2501]: E1101 00:45:50.531866 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:45:52.535773 kubelet[2501]: E1101 00:45:52.535657 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:45:54.531259 kubelet[2501]: E1101 00:45:54.531162 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:45:56.861000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:56.889228 kernel: kauditd_printk_skb: 2 callbacks suppressed Nov 1 00:45:56.889312 kernel: audit: type=1400 audit(1761957956.861:1318): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:56.861000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001cb2260 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:45:56.979570 kernel: audit: type=1300 audit(1761957956.861:1318): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001cb2260 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:45:56.861000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:45:57.190968 kernel: audit: type=1327 audit(1761957956.861:1318): 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:45:57.191016 kernel: audit: type=1400 audit(1761957956.861:1319): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:56.861000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:57.281713 kernel: audit: type=1300 audit(1761957956.861:1319): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001f10bd0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:45:56.861000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001f10bd0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:45:56.861000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:45:57.495375 kernel: audit: type=1327 audit(1761957956.861:1319): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:45:57.495439 kernel: audit: type=1400 audit(1761957956.946:1321): avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:56.946000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:57.529880 kubelet[2501]: E1101 00:45:57.529825 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:45:56.946000 audit[2326]: AVC avc: denied { watch } for 
pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:57.676249 kernel: audit: type=1400 audit(1761957956.946:1320): avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:57.676327 kernel: audit: type=1300 audit(1761957956.946:1320): arch=c000003e syscall=254 success=no exit=-13 a0=72 a1=c012bf4e70 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:45:56.946000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=72 a1=c012bf4e70 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:45:57.774652 kernel: audit: type=1300 audit(1761957956.946:1321): arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c0038eed00 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:45:56.946000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c0038eed00 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:45:56.946000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:45:56.946000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:45:56.950000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:56.950000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c0036870a0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:45:56.950000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:45:56.950000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sdb9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:56.950000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:56.950000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c00e677590 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:45:56.950000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:45:56.950000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=72 a1=c008f0d470 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:45:56.950000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:45:56.950000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:45:56.950000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c007f66690 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:45:56.950000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:45:58.530622 kubelet[2501]: E1101 00:45:58.530490 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:45:59.531893 kubelet[2501]: E1101 00:45:59.531749 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:46:03.471000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:03.510554 kernel: kauditd_printk_skb: 14 callbacks suppressed Nov 1 00:46:03.510646 kernel: audit: type=1400 audit(1761957963.471:1326): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:03.529624 kubelet[2501]: E1101 00:46:03.529576 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:46:03.471000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0009e71c0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:46:03.723152 kernel: audit: type=1300 audit(1761957963.471:1326): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0009e71c0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:46:03.723222 kernel: audit: type=1327 audit(1761957963.471:1326): 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:46:03.471000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:46:03.472000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:03.906799 kernel: audit: type=1400 audit(1761957963.472:1327): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:03.906862 kernel: audit: type=1300 audit(1761957963.472:1327): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001cb2480 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:46:03.472000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001cb2480 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:46:04.028573 kernel: audit: type=1327 audit(1761957963.472:1327): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:46:03.472000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:46:03.509000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:04.213788 kernel: audit: type=1400 audit(1761957963.509:1328): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:04.213851 kernel: audit: type=1300 audit(1761957963.509:1328): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00037ccc0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 
key=(null) Nov 1 00:46:03.509000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00037ccc0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:46:04.335579 kernel: audit: type=1327 audit(1761957963.509:1328): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:46:03.509000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:46:04.429779 kernel: audit: type=1400 audit(1761957963.509:1329): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:03.509000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:03.509000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0007fd1a0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:46:03.509000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:46:05.530251 env[1565]: time="2025-11-01T00:46:05.530203303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:46:05.890688 env[1565]: time="2025-11-01T00:46:05.890446740Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:05.891669 env[1565]: time="2025-11-01T00:46:05.891490028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:46:05.892059 kubelet[2501]: E1101 00:46:05.891950 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:46:05.892059 kubelet[2501]: E1101 00:46:05.892041 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:46:05.892999 kubelet[2501]: E1101 00:46:05.892281 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4wc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:05.895175 env[1565]: time="2025-11-01T00:46:05.895066840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:46:06.292204 env[1565]: time="2025-11-01T00:46:06.292144119Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:06.296444 env[1565]: time="2025-11-01T00:46:06.296391466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:46:06.296571 kubelet[2501]: E1101 00:46:06.296548 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:46:06.296619 kubelet[2501]: E1101 00:46:06.296580 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:46:06.296670 kubelet[2501]: E1101 00:46:06.296649 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4wc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:06.297771 kubelet[2501]: E1101 00:46:06.297752 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:46:06.529939 kubelet[2501]: E1101 00:46:06.529889 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:46:09.529589 env[1565]: time="2025-11-01T00:46:09.529561263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:46:09.887836 env[1565]: time="2025-11-01T00:46:09.887605359Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:09.888651 env[1565]: time="2025-11-01T00:46:09.888536767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:46:09.888939 kubelet[2501]: E1101 00:46:09.888836 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:46:09.888939 kubelet[2501]: E1101 00:46:09.888912 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:46:09.889796 kubelet[2501]: E1101 00:46:09.889123 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfh6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bxfpm_calico-system(135646f8-0c66-45b5-80ce-9bb45c825de7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:09.890458 kubelet[2501]: E1101 00:46:09.890384 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:46:10.533312 env[1565]: 
time="2025-11-01T00:46:10.533188599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:46:10.923620 env[1565]: time="2025-11-01T00:46:10.923552240Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:10.924004 env[1565]: time="2025-11-01T00:46:10.923939810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:46:10.924159 kubelet[2501]: E1101 00:46:10.924105 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:46:10.924159 kubelet[2501]: E1101 00:46:10.924140 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:46:10.924402 kubelet[2501]: E1101 00:46:10.924230 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p47qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-859bf66984-8h8hn_calico-system(9cd09edd-44db-4b31-b369-b622badfedc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:10.926077 kubelet[2501]: E1101 00:46:10.926027 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:46:11.529681 env[1565]: time="2025-11-01T00:46:11.529598847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:46:11.912447 env[1565]: time="2025-11-01T00:46:11.912205045Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:11.913447 env[1565]: time="2025-11-01T00:46:11.913333771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:46:11.913942 kubelet[2501]: E1101 00:46:11.913843 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:46:11.914210 kubelet[2501]: E1101 00:46:11.913967 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:46:11.914468 kubelet[2501]: E1101 00:46:11.914324 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:03c5ae372c8f45a08ebfcb08205f19e3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-slg5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c874fc74-qqjt8_calico-system(30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:11.917665 env[1565]: time="2025-11-01T00:46:11.917593042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:46:12.303796 env[1565]: time="2025-11-01T00:46:12.303703028Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:12.305021 env[1565]: time="2025-11-01T00:46:12.304860577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:46:12.305422 kubelet[2501]: E1101 00:46:12.305323 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:46:12.306144 kubelet[2501]: E1101 00:46:12.305441 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:46:12.306144 kubelet[2501]: E1101 00:46:12.305752 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-slg5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c874fc74-qqjt8_calico-system(30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:12.307286 kubelet[2501]: E1101 00:46:12.307152 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:46:15.531977 env[1565]: time="2025-11-01T00:46:15.531890653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:46:15.893559 env[1565]: time="2025-11-01T00:46:15.893449960Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:15.894111 env[1565]: time="2025-11-01T00:46:15.894047861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:46:15.894278 kubelet[2501]: E1101 00:46:15.894215 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:46:15.894278 kubelet[2501]: E1101 00:46:15.894255 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:46:15.894549 kubelet[2501]: E1101 00:46:15.894347 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48hx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d69b6c6c-vws84_calico-apiserver(af4cf953-58b5-4727-a1f5-dcd340748032): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:15.895594 
kubelet[2501]: E1101 00:46:15.895538 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:46:19.532289 kubelet[2501]: E1101 00:46:19.532057 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:46:20.530132 env[1565]: time="2025-11-01T00:46:20.530103556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:46:20.910606 env[1565]: time="2025-11-01T00:46:20.910347371Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:20.911361 env[1565]: time="2025-11-01T00:46:20.911252503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:46:20.911777 kubelet[2501]: E1101 00:46:20.911655 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:46:20.911777 kubelet[2501]: E1101 00:46:20.911753 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:46:20.912647 kubelet[2501]: E1101 00:46:20.912045 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pvv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d69b6c6c-s7kdv_calico-apiserver(2f6bb8ca-d7c7-4d64-919c-e85097fdc068): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:20.914007 kubelet[2501]: E1101 00:46:20.913899 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:46:23.536397 kubelet[2501]: E1101 00:46:23.536249 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:46:23.537711 kubelet[2501]: E1101 00:46:23.537589 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:46:24.531015 kubelet[2501]: E1101 00:46:24.530878 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:46:30.533382 kubelet[2501]: E1101 00:46:30.533212 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:46:30.538014 kubelet[2501]: E1101 00:46:30.537871 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:46:34.530039 kubelet[2501]: E1101 00:46:34.530016 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:46:35.529738 kubelet[2501]: E1101 00:46:35.529705 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:46:37.533016 kubelet[2501]: E1101 00:46:37.532880 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:46:39.531459 kubelet[2501]: E1101 00:46:39.531351 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:46:41.529516 kubelet[2501]: E1101 00:46:41.529484 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:46:45.532681 kubelet[2501]: E1101 00:46:45.532558 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:46:48.531012 kubelet[2501]: E1101 00:46:48.530885 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:46:49.529569 kubelet[2501]: E1101 00:46:49.529541 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:46:49.529886 kubelet[2501]: E1101 00:46:49.529819 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:46:52.529462 kubelet[2501]: E1101 00:46:52.529432 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:46:53.530893 kubelet[2501]: E1101 00:46:53.530769 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:46:56.862000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:56.905959 kernel: kauditd_printk_skb: 2 callbacks suppressed Nov 1 00:46:56.906060 kernel: audit: type=1400 audit(1761958016.862:1330): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:56.862000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0014c4d50 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:46:57.117893 kernel: audit: type=1300 audit(1761958016.862:1330): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0014c4d50 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:46:57.117963 kernel: audit: type=1327 audit(1761958016.862:1330): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:46:56.862000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:46:57.211139 kernel: audit: type=1400 audit(1761958016.862:1331): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:56.862000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 
tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:56.862000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001cb34c0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:46:57.421691 kernel: audit: type=1300 audit(1761958016.862:1331): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001cb34c0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:46:57.421761 kernel: audit: type=1327 audit(1761958016.862:1331): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:46:56.862000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:46:57.514908 kernel: audit: type=1400 audit(1761958016.947:1332): avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:56.947000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:57.605602 kernel: audit: type=1300 audit(1761958016.947:1332): arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c0098700f0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:46:56.947000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c0098700f0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:46:56.947000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:46:57.797402 kernel: audit: type=1327 audit(1761958016.947:1332): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:46:57.797486 kernel: audit: type=1400 
audit(1761958016.947:1333): avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:56.947000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:56.947000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=69 a1=c01288b8e0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:46:56.947000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:46:56.952000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:56.952000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c009870210 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:46:56.952000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:46:56.952000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:56.952000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c00523dcc0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:46:56.952000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:46:56.952000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sdb9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:56.952000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c0098702a0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:46:56.952000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:46:56.952000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:46:56.952000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=69 a1=c013045f50 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:46:56.952000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:47:00.530094 kubelet[2501]: E1101 00:47:00.530063 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:47:01.531643 kubelet[2501]: E1101 00:47:01.531470 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:47:03.474000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:03.517552 kernel: kauditd_printk_skb: 14 callbacks suppressed Nov 1 00:47:03.517653 kernel: audit: type=1400 
audit(1761958023.474:1338): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:03.530019 kubelet[2501]: E1101 00:47:03.529987 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:47:03.530330 kubelet[2501]: E1101 00:47:03.530244 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:47:03.474000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001e47300 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:47:03.727953 kernel: audit: type=1300 audit(1761958023.474:1338): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001e47300 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:47:03.728038 kernel: audit: type=1327 audit(1761958023.474:1338): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:47:03.474000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:47:03.820527 kernel: audit: type=1400 audit(1761958023.474:1339): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 
scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:03.474000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:03.474000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002ebae60 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:47:04.030818 kernel: audit: type=1300 audit(1761958023.474:1339): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002ebae60 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:47:04.030904 kernel: audit: type=1327 audit(1761958023.474:1339): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:47:03.474000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:47:03.510000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:04.214728 kernel: audit: type=1400 audit(1761958023.510:1340): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:04.214807 kernel: audit: type=1300 audit(1761958023.510:1340): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001cb37c0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:47:03.510000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001cb37c0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:47:04.335569 kernel: audit: type=1327 audit(1761958023.510:1340): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:47:03.510000 audit: 
PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:47:04.429050 kernel: audit: type=1400 audit(1761958023.510:1341): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:03.510000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:03.510000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002e44980 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:47:03.510000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:47:04.530918 kubelet[2501]: E1101 00:47:04.530875 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:47:06.537042 kubelet[2501]: E1101 00:47:06.536886 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:47:14.531911 kubelet[2501]: E1101 00:47:14.531804 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:47:14.533089 kubelet[2501]: E1101 00:47:14.531907 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:47:15.530662 kubelet[2501]: E1101 00:47:15.530564 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:47:15.530662 kubelet[2501]: E1101 00:47:15.530635 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:47:18.530509 kubelet[2501]: E1101 00:47:18.530471 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:47:20.532252 kubelet[2501]: E1101 00:47:20.532146 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:47:26.530300 kubelet[2501]: E1101 00:47:26.530268 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:47:27.529379 kubelet[2501]: E1101 00:47:27.529348 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:47:28.532460 kubelet[2501]: E1101 00:47:28.532346 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:47:29.529392 env[1565]: time="2025-11-01T00:47:29.529338119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:47:29.883713 env[1565]: time="2025-11-01T00:47:29.883435353Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:29.884965 env[1565]: time="2025-11-01T00:47:29.884828089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:47:29.885317 kubelet[2501]: E1101 00:47:29.885214 2501 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:47:29.886085 kubelet[2501]: E1101 00:47:29.885310 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:47:29.886085 kubelet[2501]: E1101 00:47:29.885587 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4wc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:29.888642 env[1565]: time="2025-11-01T00:47:29.888541799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:47:30.247776 env[1565]: time="2025-11-01T00:47:30.247622461Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:30.248750 env[1565]: time="2025-11-01T00:47:30.248558270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:47:30.249187 kubelet[2501]: E1101 00:47:30.249083 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:47:30.249374 kubelet[2501]: E1101 00:47:30.249190 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:47:30.249577 kubelet[2501]: E1101 00:47:30.249443 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4wc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:30.250956 kubelet[2501]: E1101 00:47:30.250829 2501 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:47:32.529962 env[1565]: time="2025-11-01T00:47:32.529934883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:47:32.880644 env[1565]: time="2025-11-01T00:47:32.880541496Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:32.880937 env[1565]: time="2025-11-01T00:47:32.880908999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:47:32.881253 kubelet[2501]: E1101 00:47:32.881046 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:47:32.881253 kubelet[2501]: E1101 00:47:32.881088 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:47:32.881625 kubelet[2501]: E1101 00:47:32.881282 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p47qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-859bf66984-8h8hn_calico-system(9cd09edd-44db-4b31-b369-b622badfedc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:32.881784 env[1565]: time="2025-11-01T00:47:32.881353616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:47:32.882439 kubelet[2501]: E1101 00:47:32.882415 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" 
podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:47:33.242007 env[1565]: time="2025-11-01T00:47:33.241852494Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:33.242808 env[1565]: time="2025-11-01T00:47:33.242665201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:47:33.243235 kubelet[2501]: E1101 00:47:33.243113 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:47:33.243235 kubelet[2501]: E1101 00:47:33.243212 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:47:33.243671 kubelet[2501]: E1101 00:47:33.243538 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfh6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bxfpm_calico-system(135646f8-0c66-45b5-80ce-9bb45c825de7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:33.245014 kubelet[2501]: E1101 00:47:33.244894 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:47:38.529653 env[1565]: time="2025-11-01T00:47:38.529627748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:47:38.909873 env[1565]: time="2025-11-01T00:47:38.909766162Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:38.910262 env[1565]: time="2025-11-01T00:47:38.910206831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:47:38.910406 kubelet[2501]: E1101 00:47:38.910360 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:47:38.910406 kubelet[2501]: E1101 00:47:38.910391 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:47:38.910635 kubelet[2501]: E1101 00:47:38.910470 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48hx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d69b6c6c-vws84_calico-apiserver(af4cf953-58b5-4727-a1f5-dcd340748032): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:38.911643 kubelet[2501]: E1101 00:47:38.911595 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:47:39.530011 env[1565]: time="2025-11-01T00:47:39.529930443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:47:39.926798 env[1565]: time="2025-11-01T00:47:39.926665852Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:39.927690 env[1565]: time="2025-11-01T00:47:39.927532984Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:47:39.928086 kubelet[2501]: E1101 00:47:39.927957 2501 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:47:39.928086 kubelet[2501]: E1101 00:47:39.928068 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:47:39.929016 kubelet[2501]: E1101 00:47:39.928349 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:03c5ae372c8f45a08ebfcb08205f19e3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-slg5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c874fc74-qqjt8_calico-system(30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:39.931565 env[1565]: time="2025-11-01T00:47:39.931431962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:47:40.311422 env[1565]: time="2025-11-01T00:47:40.311148634Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:40.312469 env[1565]: time="2025-11-01T00:47:40.312327356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:47:40.312885 kubelet[2501]: E1101 00:47:40.312758 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:47:40.312885 kubelet[2501]: E1101 00:47:40.312853 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:47:40.313238 kubelet[2501]: E1101 00:47:40.313101 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-slg5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c874fc74-qqjt8_calico-system(30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:40.314661 kubelet[2501]: E1101 00:47:40.314522 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:47:41.532282 env[1565]: time="2025-11-01T00:47:41.532163704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:47:41.533312 kubelet[2501]: E1101 00:47:41.532636 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:47:41.902396 env[1565]: time="2025-11-01T00:47:41.902131687Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:41.903088 env[1565]: time="2025-11-01T00:47:41.902958445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:47:41.903527 kubelet[2501]: E1101 00:47:41.903417 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:47:41.903795 kubelet[2501]: E1101 00:47:41.903555 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:47:41.904035 kubelet[2501]: E1101 00:47:41.903855 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pvv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d69b6c6c-s7kdv_calico-apiserver(2f6bb8ca-d7c7-4d64-919c-e85097fdc068): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:41.905288 kubelet[2501]: E1101 00:47:41.905216 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:47:46.531685 kubelet[2501]: E1101 00:47:46.531588 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:47:48.534154 kubelet[2501]: E1101 00:47:48.534113 2501 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:47:50.533729 kubelet[2501]: E1101 00:47:50.533625 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:47:51.531066 kubelet[2501]: E1101 00:47:51.530989 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:47:55.531640 kubelet[2501]: E1101 00:47:55.531558 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:47:55.532810 kubelet[2501]: E1101 00:47:55.532272 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:47:56.861000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:56.891011 kernel: kauditd_printk_skb: 2 callbacks suppressed Nov 1 00:47:56.891095 kernel: audit: type=1400 audit(1761958076.861:1342): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:56.861000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0016e21e0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:47:56.981544 kernel: audit: type=1300 audit(1761958076.861:1342): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0016e21e0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:47:56.861000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:47:57.192966 kernel: audit: type=1327 audit(1761958076.861:1342): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:47:57.193025 kernel: audit: type=1400 audit(1761958076.861:1343): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:56.861000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:57.283785 kernel: audit: type=1300 audit(1761958076.861:1343): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c002c05a40 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:47:56.861000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c002c05a40 a2=fc6 
a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:47:57.404467 kernel: audit: type=1327 audit(1761958076.861:1343): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:47:56.861000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:47:57.497736 kernel: audit: type=1400 audit(1761958076.947:1344): avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:56.947000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:57.588573 kernel: audit: type=1300 audit(1761958076.947:1344): arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c008e43110 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:47:56.947000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c008e43110 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:47:56.947000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:47:57.780412 kernel: audit: type=1327 audit(1761958076.947:1344): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:47:57.780487 kernel: audit: type=1400 audit(1761958076.947:1345): avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:56.947000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:56.947000 audit[2326]: SYSCALL arch=c000003e syscall=254 
success=no exit=-13 a0=69 a1=c0054efd00 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:47:56.947000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:47:56.952000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:56.952000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c005b535c0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:47:56.952000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:47:56.952000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sdb9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:56.952000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=69 a1=c007340900 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:47:56.952000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:47:56.952000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:56.952000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=69 a1=c0088de000 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:47:56.952000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:47:56.952000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:47:56.952000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c00670df50 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:47:56.952000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:47:59.531130 kubelet[2501]: E1101 00:47:59.530982 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:47:59.531130 kubelet[2501]: E1101 00:47:59.530982 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:48:03.473000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:03.501968 kernel: kauditd_printk_skb: 14 callbacks suppressed Nov 1 00:48:03.502057 kernel: audit: type=1400 audit(1761958083.473:1350): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:03.473000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:03.683331 kernel: audit: type=1400 audit(1761958083.473:1351): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:03.683376 kernel: audit: type=1300 audit(1761958083.473:1350): arch=c000003e 
syscall=254 success=no exit=-13 a0=9 a1=c000e9e8e0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:48:03.473000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000e9e8e0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:48:03.473000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:48:03.895591 kernel: audit: type=1327 audit(1761958083.473:1350): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:48:03.895657 kernel: audit: type=1300 audit(1761958083.473:1351): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000c1f5a0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:48:03.473000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000c1f5a0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:48:03.473000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:48:04.109675 kernel: audit: type=1327 audit(1761958083.473:1351): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:48:04.109741 kernel: audit: type=1400 audit(1761958083.510:1352): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:03.510000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:04.199878 kernel: audit: type=1300 audit(1761958083.510:1352): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001d42d00 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:48:03.510000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001d42d00 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:48:04.320592 kernel: audit: type=1327 audit(1761958083.510:1352): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:48:03.510000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:48:04.414156 kernel: audit: type=1400 audit(1761958083.510:1353): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:03.510000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:03.510000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001d42d20 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:48:03.510000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:48:04.531006 kubelet[2501]: E1101 00:48:04.530976 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 
00:48:05.531756 kubelet[2501]: E1101 00:48:05.531658 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:48:08.531387 kubelet[2501]: E1101 00:48:08.531238 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:48:10.530294 kubelet[2501]: E1101 00:48:10.530238 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:48:13.532156 kubelet[2501]: E1101 00:48:13.532014 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:48:14.530212 kubelet[2501]: E1101 00:48:14.530181 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:48:15.532928 kubelet[2501]: E1101 00:48:15.532799 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:48:18.531202 kubelet[2501]: E1101 00:48:18.531072 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:48:20.529472 kubelet[2501]: E1101 00:48:20.529448 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:48:23.533013 kubelet[2501]: E1101 00:48:23.532876 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:48:24.530902 kubelet[2501]: E1101 
00:48:24.530805 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:48:28.530769 kubelet[2501]: E1101 00:48:28.530734 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:48:29.531125 kubelet[2501]: E1101 00:48:29.531017 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:48:33.530084 kubelet[2501]: E1101 00:48:33.530010 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:48:33.530084 kubelet[2501]: E1101 00:48:33.530055 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" 
podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:48:37.532789 kubelet[2501]: E1101 00:48:37.532657 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:48:39.530914 kubelet[2501]: E1101 00:48:39.530776 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:48:39.531964 kubelet[2501]: E1101 00:48:39.531834 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:48:41.528985 kubelet[2501]: E1101 00:48:41.528930 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:48:45.530846 kubelet[2501]: E1101 00:48:45.530716 2501 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:48:46.529720 kubelet[2501]: E1101 00:48:46.529690 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:48:50.533222 kubelet[2501]: E1101 00:48:50.533046 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:48:51.530620 kubelet[2501]: E1101 00:48:51.530579 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:48:52.530009 kubelet[2501]: E1101 00:48:52.529980 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:48:54.530466 
kubelet[2501]: E1101 00:48:54.530416 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:48:56.531805 kubelet[2501]: E1101 00:48:56.531714 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:48:56.862000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:56.906783 kernel: kauditd_printk_skb: 2 callbacks suppressed Nov 1 00:48:56.906896 kernel: audit: type=1400 audit(1761958136.862:1354): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:56.862000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c002eba5a0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:48:57.115959 kernel: audit: type=1300 audit(1761958136.862:1354): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c002eba5a0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:48:57.116052 kernel: audit: type=1327 audit(1761958136.862:1354): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:48:56.862000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:48:57.208674 kernel: audit: type=1400 audit(1761958136.862:1355): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:56.862000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:57.299385 kernel: audit: type=1300 audit(1761958136.862:1355): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00293dbc0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:48:56.862000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00293dbc0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:48:57.419923 kernel: audit: type=1327 audit(1761958136.862:1355): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:48:56.862000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:48:57.513196 kernel: audit: type=1400 audit(1761958136.949:1356): avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:56.949000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:57.603858 kernel: audit: type=1300 audit(1761958136.949:1356): arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c004bdade0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:48:56.949000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c004bdade0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" 
exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:48:56.949000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:48:57.795486 kernel: audit: type=1327 audit(1761958136.949:1356): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:48:57.795549 kernel: audit: type=1400 audit(1761958136.949:1357): avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:56.949000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:56.949000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c01043c9e0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:48:56.949000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:48:56.952000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:56.952000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c00daa0b70 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:48:56.952000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:48:56.953000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:56.953000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c00daa0bd0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" 
subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:48:56.953000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:48:56.953000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sdb9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:56.953000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c008c46ff0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:48:56.953000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:48:56.953000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:48:56.953000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=69 a1=c00581e300 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:48:56.953000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:48:57.908413 systemd[1]: Started sshd@9-145.40.82.49:22-198.235.24.64:49868.service. Nov 1 00:48:57.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-145.40.82.49:22-198.235.24.64:49868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:48:58.067635 sshd[6623]: kex_exchange_identification: Connection closed by remote host Nov 1 00:48:58.067635 sshd[6623]: Connection closed by 198.235.24.64 port 49868 Nov 1 00:48:58.069276 systemd[1]: sshd@9-145.40.82.49:22-198.235.24.64:49868.service: Deactivated successfully. Nov 1 00:48:58.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-145.40.82.49:22-198.235.24.64:49868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:49:01.531411 kubelet[2501]: E1101 00:49:01.531301 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:49:02.531930 kubelet[2501]: E1101 00:49:02.531856 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:49:03.475000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:03.504042 kernel: kauditd_printk_skb: 16 callbacks suppressed Nov 1 00:49:03.504117 kernel: audit: type=1400 audit(1761958143.475:1364): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:03.529258 kubelet[2501]: E1101 00:49:03.529239 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:49:03.475000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001c31140 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:49:03.595569 kernel: audit: type=1300 audit(1761958143.475:1364): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001c31140 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:49:03.475000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:49:03.807488 kernel: audit: type=1327 audit(1761958143.475:1364): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:49:03.807584 kernel: audit: type=1400 audit(1761958143.475:1365): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:03.475000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:03.896994 kernel: audit: type=1300 audit(1761958143.475:1365): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0017fb360 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:49:03.475000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0017fb360 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:49:04.017716 kernel: audit: type=1327 audit(1761958143.475:1365): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:49:03.475000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:49:03.510000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:04.201406 kernel: audit: type=1400 audit(1761958143.510:1366): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:04.201485 kernel: audit: type=1300 audit(1761958143.510:1366): arch=c000003e syscall=254 
success=no exit=-13 a0=9 a1=c0016e35c0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:49:03.510000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0016e35c0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:49:04.322197 kernel: audit: type=1327 audit(1761958143.510:1366): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:49:03.510000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:49:03.510000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:04.505897 kernel: audit: type=1400 audit(1761958143.510:1367): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:03.510000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001dbc580 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:49:03.510000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:49:05.529422 kubelet[2501]: E1101 00:49:05.529375 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:49:05.529772 kubelet[2501]: E1101 00:49:05.529654 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:49:11.531846 kubelet[2501]: E1101 00:49:11.531742 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:49:13.529331 kubelet[2501]: E1101 00:49:13.529258 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:49:16.530062 kubelet[2501]: E1101 00:49:16.530031 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:49:16.530466 kubelet[2501]: E1101 00:49:16.530226 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:49:17.531140 kubelet[2501]: E1101 00:49:17.531045 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:49:17.532438 kubelet[2501]: E1101 00:49:17.532147 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:49:22.531879 kubelet[2501]: E1101 00:49:22.531656 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:49:24.531232 kubelet[2501]: E1101 00:49:24.531206 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:49:28.531076 kubelet[2501]: E1101 00:49:28.530944 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:49:29.530358 kubelet[2501]: E1101 00:49:29.530327 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:49:30.533834 kubelet[2501]: E1101 00:49:30.533727 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:49:31.529797 kubelet[2501]: E1101 00:49:31.529740 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:49:35.531469 kubelet[2501]: E1101 00:49:35.531356 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:49:37.529403 kubelet[2501]: E1101 00:49:37.529375 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:49:39.531061 kubelet[2501]: E1101 00:49:39.530959 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:49:41.532420 kubelet[2501]: E1101 00:49:41.532301 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:49:43.531478 kubelet[2501]: E1101 00:49:43.531344 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:49:45.531009 kubelet[2501]: E1101 00:49:45.530974 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:49:47.531689 kubelet[2501]: E1101 00:49:47.531609 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:49:50.529713 kubelet[2501]: E1101 00:49:50.529684 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:49:52.531738 kubelet[2501]: E1101 00:49:52.531614 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:49:54.533099 kubelet[2501]: E1101 00:49:54.532964 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:49:56.532114 kubelet[2501]: E1101 00:49:56.532078 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:49:56.864000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:56.891606 kernel: kauditd_printk_skb: 2 callbacks suppressed Nov 1 00:49:56.891696 kernel: audit: type=1400 audit(1761958196.864:1368): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:56.864000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0017006e0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:49:57.100785 kernel: audit: type=1300 audit(1761958196.864:1368): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0017006e0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:49:57.100859 kernel: audit: type=1327 audit(1761958196.864:1368): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:49:56.864000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:49:56.864000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:57.284142 kernel: audit: type=1400 audit(1761958196.864:1369): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:57.284211 kernel: audit: type=1300 audit(1761958196.864:1369): arch=c000003e syscall=254 
success=no exit=-13 a0=9 a1=c002bf0210 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:49:56.864000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c002bf0210 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:49:56.864000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:49:57.497995 kernel: audit: type=1327 audit(1761958196.864:1369): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:49:57.498081 kernel: audit: type=1400 audit(1761958196.950:1370): avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:56.950000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:56.950000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c005c35dc0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:49:57.686651 kernel: audit: type=1300 audit(1761958196.950:1370): arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c005c35dc0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:49:57.686704 kernel: audit: type=1327 audit(1761958196.950:1370): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:49:56.950000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:49:57.779967 kernel: audit: type=1400 audit(1761958196.950:1371): avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 
scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:56.950000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:56.950000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c00e20e450 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:49:56.950000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:49:56.954000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sdb9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:56.954000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c00e9c9170 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:49:56.954000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:49:56.954000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:56.954000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c00e9c91a0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:49:56.954000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:49:56.955000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:56.955000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c008aafbe0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" 
subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:49:56.955000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:49:56.955000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:49:56.955000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c00e9c9230 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:49:56.955000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:49:58.530269 kubelet[2501]: E1101 00:49:58.530218 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:50:01.529129 kubelet[2501]: E1101 00:50:01.529063 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:50:03.476000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:03.517561 kernel: kauditd_printk_skb: 14 callbacks suppressed Nov 1 00:50:03.517676 kernel: audit: type=1400 audit(1761958203.476:1376): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 
scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:03.529627 kubelet[2501]: E1101 00:50:03.529600 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:50:03.529627 kubelet[2501]: E1101 00:50:03.529593 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:50:03.476000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001700a60 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:50:03.726578 kernel: audit: type=1300 audit(1761958203.476:1376): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001700a60 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:50:03.726648 kernel: audit: type=1327 audit(1761958203.476:1376): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:50:03.476000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:50:03.819168 kernel: audit: type=1400 audit(1761958203.476:1377): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:03.476000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:03.908774 kernel: audit: type=1300 audit(1761958203.476:1377): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000e9e000 a2=fc6 a3=0 items=0 ppid=2186 
pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:50:03.476000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000e9e000 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:50:04.029581 kernel: audit: type=1327 audit(1761958203.476:1377): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:50:03.476000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:50:04.123065 kernel: audit: type=1400 audit(1761958203.512:1378): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:03.512000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:04.213399 kernel: audit: type=1300 audit(1761958203.512:1378): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0009e7580 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:50:03.512000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0009e7580 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:50:04.334250 kernel: audit: type=1327 audit(1761958203.512:1378): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:50:03.512000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:50:04.427806 kernel: audit: type=1400 audit(1761958203.512:1379): avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 
tclass=file permissive=0 Nov 1 00:50:03.512000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:03.512000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000e9e020 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:50:03.512000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:50:05.532627 kubelet[2501]: E1101 00:50:05.532508 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:50:11.531591 kubelet[2501]: E1101 00:50:11.531476 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:50:12.529506 kubelet[2501]: E1101 00:50:12.529473 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:50:13.533141 kubelet[2501]: E1101 00:50:13.533001 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:50:16.531004 kubelet[2501]: E1101 00:50:16.530892 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:50:16.532328 env[1565]: time="2025-11-01T00:50:16.531601217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:50:16.914356 env[1565]: time="2025-11-01T00:50:16.914219178Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:50:16.915253 env[1565]: time="2025-11-01T00:50:16.915122851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:50:16.915745 kubelet[2501]: E1101 00:50:16.915621 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:50:16.915745 kubelet[2501]: E1101 00:50:16.915729 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:50:16.916259 kubelet[2501]: E1101 00:50:16.916078 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfh6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bxfpm_calico-system(135646f8-0c66-45b5-80ce-9bb45c825de7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:50:16.917565 kubelet[2501]: E1101 00:50:16.917454 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:50:20.530688 env[1565]: 
time="2025-11-01T00:50:20.530661889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:50:20.899925 env[1565]: time="2025-11-01T00:50:20.899666873Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:50:20.900571 env[1565]: time="2025-11-01T00:50:20.900444431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:50:20.900965 kubelet[2501]: E1101 00:50:20.900859 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:50:20.901780 kubelet[2501]: E1101 00:50:20.900983 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:50:20.901780 kubelet[2501]: E1101 00:50:20.901261 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4wc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:50:20.904156 env[1565]: time="2025-11-01T00:50:20.904039819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:50:21.283410 env[1565]: time="2025-11-01T00:50:21.283268056Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:50:21.284193 env[1565]: time="2025-11-01T00:50:21.284086915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:50:21.284646 kubelet[2501]: E1101 00:50:21.284524 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:50:21.284646 kubelet[2501]: E1101 00:50:21.284628 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:50:21.285034 kubelet[2501]: E1101 00:50:21.284929 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4wc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qpkjg_calico-system(ce15fc81-d33c-45b3-b08a-5d312fb076f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:50:21.286462 kubelet[2501]: E1101 00:50:21.286337 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:50:23.532027 env[1565]: time="2025-11-01T00:50:23.531873266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:50:23.915194 env[1565]: time="2025-11-01T00:50:23.915139778Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:50:23.915632 env[1565]: time="2025-11-01T00:50:23.915583905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:50:23.915760 kubelet[2501]: E1101 00:50:23.915737 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:50:23.915920 kubelet[2501]: E1101 00:50:23.915769 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:50:23.915920 kubelet[2501]: E1101 00:50:23.915845 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48hx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d69b6c6c-vws84_calico-apiserver(af4cf953-58b5-4727-a1f5-dcd340748032): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:50:23.917024 kubelet[2501]: E1101 00:50:23.916973 2501 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:50:24.531942 env[1565]: time="2025-11-01T00:50:24.531815751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:50:24.892540 env[1565]: time="2025-11-01T00:50:24.892263536Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:50:24.893469 env[1565]: time="2025-11-01T00:50:24.893009112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:50:24.893665 kubelet[2501]: E1101 00:50:24.893468 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:50:24.893665 kubelet[2501]: E1101 00:50:24.893599 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:50:24.894052 kubelet[2501]: E1101 00:50:24.893882 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p47qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-859bf66984-8h8hn_calico-system(9cd09edd-44db-4b31-b369-b622badfedc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:50:24.895290 kubelet[2501]: E1101 00:50:24.895201 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:50:26.530372 env[1565]: time="2025-11-01T00:50:26.530278995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:50:26.907272 env[1565]: time="2025-11-01T00:50:26.907153259Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:50:26.907606 env[1565]: time="2025-11-01T00:50:26.907562005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:50:26.907820 kubelet[2501]: E1101 00:50:26.907753 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:50:26.907820 kubelet[2501]: E1101 00:50:26.907804 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" 
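
Every pull failure recorded above follows the same shape: containerd asks ghcr.io for the tag, the registry answers with "trying next host - response was http.StatusNotFound", and the kubelet surfaces that as ErrImagePull. The same lookup can be reproduced directly against the OCI distribution API; the sketch below uses only the Go standard library. The repository and tag are copied from the log, but the ghcr.io anonymous-token endpoint, the Accept header, and the exact status code a missing tag produces are assumptions, so treat this as an indicative probe rather than an authoritative check.

```go
// Probe whether an image tag resolves in a registry that speaks the OCI
// distribution API, mirroring the "not found" errors in the log above.
// Assumption: ghcr.io hands out anonymous pull tokens for public repositories.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo := "flatcar/calico/whisker" // repository from the failing reference
	tag := "v3.30.4"                 // tag the kubelet could not resolve

	// Request an anonymous bearer token scoped to pulling this repository.
	tokenURL := fmt.Sprintf(
		"https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
	resp, err := http.Get(tokenURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD the manifest: 200 means the tag resolves, 404 matches "not found".
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Printf("ghcr.io/%s:%s -> HTTP %d\n", repo, tag, res.StatusCode)
}
```

A 404 from the manifests endpoint would match the "not found" errors containerd records here; a 200 would instead point at a node-local resolution or mirror-configuration problem.
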
Nov 1 00:50:26.908133 kubelet[2501]: E1101 00:50:26.907902 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:03c5ae372c8f45a08ebfcb08205f19e3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-slg5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c874fc74-qqjt8_calico-system(30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:50:26.909623 env[1565]: time="2025-11-01T00:50:26.909600168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:50:27.263338 env[1565]: time="2025-11-01T00:50:27.263279034Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:50:27.263828 env[1565]: time="2025-11-01T00:50:27.263804774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:50:27.264050 kubelet[2501]: E1101 00:50:27.264010 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:50:27.264096 kubelet[2501]: E1101 00:50:27.264061 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:50:27.264168 
kubelet[2501]: E1101 00:50:27.264147 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-slg5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c874fc74-qqjt8_calico-system(30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:50:27.265334 kubelet[2501]: E1101 00:50:27.265318 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:50:30.531662 env[1565]: time="2025-11-01T00:50:30.531624491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:50:30.895834 env[1565]: time="2025-11-01T00:50:30.895731162Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:50:30.896201 env[1565]: time="2025-11-01T00:50:30.896154209Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:50:30.896375 kubelet[2501]: E1101 00:50:30.896333 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:50:30.896659 kubelet[2501]: E1101 00:50:30.896384 2501 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:50:30.896659 kubelet[2501]: E1101 00:50:30.896504 2501 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pvv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d69b6c6c-s7kdv_calico-apiserver(2f6bb8ca-d7c7-4d64-919c-e85097fdc068): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:50:30.897722 kubelet[2501]: E1101 00:50:30.897650 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:50:31.531384 kubelet[2501]: E1101 00:50:31.531277 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:50:36.530591 kubelet[2501]: E1101 00:50:36.530547 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:50:36.531374 kubelet[2501]: E1101 00:50:36.531353 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:50:37.514104 systemd[1]: Started sshd@10-145.40.82.49:22-147.75.109.163:33718.service. Nov 1 00:50:37.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-145.40.82.49:22-147.75.109.163:33718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:50:37.541216 kernel: kauditd_printk_skb: 2 callbacks suppressed Nov 1 00:50:37.541324 kernel: audit: type=1130 audit(1761958237.513:1380): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-145.40.82.49:22-147.75.109.163:33718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:50:37.655000 audit[6806]: USER_ACCT pid=6806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:37.656294 sshd[6806]: Accepted publickey for core from 147.75.109.163 port 33718 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:50:37.660075 sshd[6806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:50:37.668858 systemd-logind[1557]: New session 12 of user core. Nov 1 00:50:37.670542 systemd[1]: Started session-12.scope. Nov 1 00:50:37.658000 audit[6806]: CRED_ACQ pid=6806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:37.759523 sshd[6806]: pam_unix(sshd:session): session closed for user core Nov 1 00:50:37.761064 systemd[1]: sshd@10-145.40.82.49:22-147.75.109.163:33718.service: Deactivated successfully. Nov 1 00:50:37.761512 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:50:37.761932 systemd-logind[1557]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:50:37.762419 systemd-logind[1557]: Removed session 12. 
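
By this point the kubelet has moved several containers from ErrImagePull into ImagePullBackOff. A minimal client-go sketch that surfaces the same waiting reasons from the API server is below; the namespace comes from the pod names in the log, while the kubeconfig path is an assumption and may differ on this host.

```go
// List containers in the calico-system namespace that are stuck waiting on
// ErrImagePull or ImagePullBackOff, i.e. the conditions the kubelet logs above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust to wherever admin credentials live.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("calico-system").List(
		context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if w := st.State.Waiting; w != nil &&
				(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
				fmt.Printf("%s/%s: %s (%s)\n", p.Name, st.Name, w.Reason, st.Image)
			}
		}
	}
}
```

Run against this cluster, it should report the same pods the kubelet keeps skipping above (csi-node-driver-qpkjg, calico-kube-controllers-859bf66984-8h8hn, whisker-84c874fc74-qqjt8), with the image references that fail to resolve.
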
Nov 1 00:50:37.838710 kernel: audit: type=1101 audit(1761958237.655:1381): pid=6806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:37.838777 kernel: audit: type=1103 audit(1761958237.658:1382): pid=6806 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:37.838795 kernel: audit: type=1006 audit(1761958237.658:1383): pid=6806 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Nov 1 00:50:37.897458 kernel: audit: type=1300 audit(1761958237.658:1383): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb08c52a0 a2=3 a3=0 items=0 ppid=1 pid=6806 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:50:37.658000 audit[6806]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb08c52a0 a2=3 a3=0 items=0 ppid=1 pid=6806 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:50:37.990232 kernel: audit: type=1327 audit(1761958237.658:1383): proctitle=737368643A20636F7265205B707269765D Nov 1 00:50:37.658000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:50:38.021112 kernel: audit: type=1105 audit(1761958237.676:1384): pid=6806 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:37.676000 audit[6806]: USER_START pid=6806 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:38.116773 kernel: audit: type=1103 audit(1761958237.678:1385): pid=6808 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:37.678000 audit[6808]: CRED_ACQ pid=6808 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:37.759000 audit[6806]: USER_END pid=6806 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:38.302229 kernel: audit: type=1106 audit(1761958237.759:1386): pid=6806 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:38.302314 kernel: audit: type=1104 audit(1761958237.759:1387): pid=6806 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:37.759000 audit[6806]: CRED_DISP pid=6806 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:37.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-145.40.82.49:22-147.75.109.163:33718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:50:39.531138 kubelet[2501]: E1101 00:50:39.531052 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:50:39.532512 kubelet[2501]: E1101 00:50:39.531965 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:50:42.771999 systemd[1]: Started sshd@11-145.40.82.49:22-147.75.109.163:48868.service. Nov 1 00:50:42.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-145.40.82.49:22-147.75.109.163:48868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:50:42.799480 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:50:42.799580 kernel: audit: type=1130 audit(1761958242.771:1389): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-145.40.82.49:22-147.75.109.163:48868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:50:42.914000 audit[6843]: USER_ACCT pid=6843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:42.915584 sshd[6843]: Accepted publickey for core from 147.75.109.163 port 48868 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:50:42.920252 sshd[6843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:50:42.929955 systemd-logind[1557]: New session 13 of user core. Nov 1 00:50:42.932331 systemd[1]: Started session-13.scope. Nov 1 00:50:42.918000 audit[6843]: CRED_ACQ pid=6843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:43.019572 sshd[6843]: pam_unix(sshd:session): session closed for user core Nov 1 00:50:43.021089 systemd[1]: sshd@11-145.40.82.49:22-147.75.109.163:48868.service: Deactivated successfully. Nov 1 00:50:43.021711 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:50:43.022061 systemd-logind[1557]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:50:43.022469 systemd-logind[1557]: Removed session 13. Nov 1 00:50:43.097576 kernel: audit: type=1101 audit(1761958242.914:1390): pid=6843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:43.097661 kernel: audit: type=1103 audit(1761958242.918:1391): pid=6843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:43.097682 kernel: audit: type=1006 audit(1761958242.918:1392): pid=6843 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Nov 1 00:50:43.156122 kernel: audit: type=1300 audit(1761958242.918:1392): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff911932a0 a2=3 a3=0 items=0 ppid=1 pid=6843 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:50:42.918000 audit[6843]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff911932a0 a2=3 a3=0 items=0 ppid=1 pid=6843 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:50:43.247991 kernel: audit: type=1327 audit(1761958242.918:1392): proctitle=737368643A20636F7265205B707269765D Nov 1 00:50:42.918000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:50:42.940000 audit[6843]: USER_START pid=6843 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:43.373256 kernel: audit: type=1105 audit(1761958242.940:1393): 
pid=6843 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:43.373332 kernel: audit: type=1103 audit(1761958242.942:1394): pid=6845 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:42.942000 audit[6845]: CRED_ACQ pid=6845 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:43.462652 kernel: audit: type=1106 audit(1761958243.019:1395): pid=6843 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:43.019000 audit[6843]: USER_END pid=6843 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:43.558325 kernel: audit: type=1104 audit(1761958243.019:1396): pid=6843 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:43.019000 audit[6843]: CRED_DISP pid=6843 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:43.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-145.40.82.49:22-147.75.109.163:48868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:50:44.529607 kubelet[2501]: E1101 00:50:44.529518 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:50:45.531178 kubelet[2501]: E1101 00:50:45.531076 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:50:48.022771 systemd[1]: Started sshd@12-145.40.82.49:22-147.75.109.163:48874.service. Nov 1 00:50:48.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-145.40.82.49:22-147.75.109.163:48874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:50:48.049867 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:50:48.050010 kernel: audit: type=1130 audit(1761958248.022:1398): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-145.40.82.49:22-147.75.109.163:48874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:50:48.162000 audit[6877]: USER_ACCT pid=6877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:48.162876 sshd[6877]: Accepted publickey for core from 147.75.109.163 port 48874 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:50:48.163963 sshd[6877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:50:48.166129 systemd-logind[1557]: New session 14 of user core. Nov 1 00:50:48.166996 systemd[1]: Started session-14.scope. Nov 1 00:50:48.248206 sshd[6877]: pam_unix(sshd:session): session closed for user core Nov 1 00:50:48.249571 systemd[1]: sshd@12-145.40.82.49:22-147.75.109.163:48874.service: Deactivated successfully. Nov 1 00:50:48.250035 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:50:48.250356 systemd-logind[1557]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:50:48.250823 systemd-logind[1557]: Removed session 14. 
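
The kubelet entries at 00:50:44 and 00:50:45 are no longer fresh pull attempts; they are the kubelet declining to retry while the per-image back-off timer runs, which is why the reason has changed from ErrImagePull to ImagePullBackOff. The sketch below illustrates the doubling-with-cap retry pattern behind those messages; the 10-second initial delay and 300-second ceiling are commonly cited kubelet defaults, not values read from this log.

```go
// Illustrative doubling-with-cap back-off, the pattern behind the repeated
// "Back-off pulling image" messages. The 10s start and 300s cap are assumed
// kubelet defaults, not values taken from this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second
	const maxDelay = 300 * time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d failed; next retry in %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

This is why the "Error syncing pod, skipping" lines recur at growing intervals for the same images until the missing tags are published or the image references are corrected.
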
Nov 1 00:50:48.163000 audit[6877]: CRED_ACQ pid=6877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:48.345091 kernel: audit: type=1101 audit(1761958248.162:1399): pid=6877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:48.345165 kernel: audit: type=1103 audit(1761958248.163:1400): pid=6877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:48.345187 kernel: audit: type=1006 audit(1761958248.163:1401): pid=6877 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Nov 1 00:50:48.163000 audit[6877]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8ee36580 a2=3 a3=0 items=0 ppid=1 pid=6877 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:50:48.495843 kernel: audit: type=1300 audit(1761958248.163:1401): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8ee36580 a2=3 a3=0 items=0 ppid=1 pid=6877 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:50:48.495916 kernel: audit: type=1327 audit(1761958248.163:1401): proctitle=737368643A20636F7265205B707269765D Nov 1 00:50:48.163000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:50:48.169000 audit[6877]: USER_START pid=6877 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:48.529221 kubelet[2501]: E1101 00:50:48.529196 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:50:48.621044 kernel: audit: type=1105 audit(1761958248.169:1402): pid=6877 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:48.621123 kernel: audit: type=1103 audit(1761958248.170:1403): pid=6879 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:48.170000 audit[6879]: CRED_ACQ pid=6879 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:48.710397 kernel: audit: type=1106 audit(1761958248.248:1404): pid=6877 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:48.248000 audit[6877]: USER_END pid=6877 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:48.806028 kernel: audit: type=1104 audit(1761958248.248:1405): pid=6877 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:48.248000 audit[6877]: CRED_DISP pid=6877 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:48.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-145.40.82.49:22-147.75.109.163:48874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:50:51.530009 kubelet[2501]: E1101 00:50:51.529966 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:50:52.530580 kubelet[2501]: E1101 00:50:52.530519 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:50:53.260108 systemd[1]: Started sshd@13-145.40.82.49:22-147.75.109.163:46426.service. Nov 1 00:50:53.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-145.40.82.49:22-147.75.109.163:46426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:50:53.302357 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:50:53.302449 kernel: audit: type=1130 audit(1761958253.260:1407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-145.40.82.49:22-147.75.109.163:46426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:50:53.416000 audit[6904]: USER_ACCT pid=6904 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.416758 sshd[6904]: Accepted publickey for core from 147.75.109.163 port 46426 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:50:53.418146 sshd[6904]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:50:53.420378 systemd-logind[1557]: New session 15 of user core. 
Nov 1 00:50:53.420975 systemd[1]: Started session-15.scope. Nov 1 00:50:53.501237 sshd[6904]: pam_unix(sshd:session): session closed for user core Nov 1 00:50:53.503088 systemd[1]: sshd@13-145.40.82.49:22-147.75.109.163:46426.service: Deactivated successfully. Nov 1 00:50:53.503453 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:50:53.503829 systemd-logind[1557]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:50:53.505645 systemd[1]: Started sshd@14-145.40.82.49:22-147.75.109.163:46436.service. Nov 1 00:50:53.506220 systemd-logind[1557]: Removed session 15. Nov 1 00:50:53.417000 audit[6904]: CRED_ACQ pid=6904 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.530060 kubelet[2501]: E1101 00:50:53.530006 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:50:53.598778 kernel: audit: type=1101 audit(1761958253.416:1408): pid=6904 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.598857 kernel: audit: type=1103 audit(1761958253.417:1409): pid=6904 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.598879 kernel: audit: type=1006 audit(1761958253.417:1410): pid=6904 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Nov 1 00:50:53.657405 kernel: audit: type=1300 audit(1761958253.417:1410): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe488c3e0 a2=3 a3=0 items=0 ppid=1 pid=6904 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:50:53.417000 audit[6904]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe488c3e0 a2=3 a3=0 items=0 ppid=1 pid=6904 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:50:53.681978 sshd[6930]: Accepted publickey for core from 147.75.109.163 port 46436 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:50:53.682760 sshd[6930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:50:53.685129 systemd-logind[1557]: New session 16 of user core. Nov 1 00:50:53.685651 systemd[1]: Started session-16.scope. 
Nov 1 00:50:53.417000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:50:53.779959 kernel: audit: type=1327 audit(1761958253.417:1410): proctitle=737368643A20636F7265205B707269765D Nov 1 00:50:53.780050 kernel: audit: type=1105 audit(1761958253.422:1411): pid=6904 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.422000 audit[6904]: USER_START pid=6904 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.787753 sshd[6930]: pam_unix(sshd:session): session closed for user core Nov 1 00:50:53.789833 systemd[1]: sshd@14-145.40.82.49:22-147.75.109.163:46436.service: Deactivated successfully. Nov 1 00:50:53.790216 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:50:53.790528 systemd-logind[1557]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:50:53.791158 systemd[1]: Started sshd@15-145.40.82.49:22-147.75.109.163:46444.service. Nov 1 00:50:53.791585 systemd-logind[1557]: Removed session 16. Nov 1 00:50:53.423000 audit[6906]: CRED_ACQ pid=6906 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.963791 kernel: audit: type=1103 audit(1761958253.423:1412): pid=6906 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.963835 kernel: audit: type=1106 audit(1761958253.501:1413): pid=6904 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.501000 audit[6904]: USER_END pid=6904 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.988474 sshd[6953]: Accepted publickey for core from 147.75.109.163 port 46444 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:50:53.989862 sshd[6953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:50:53.992174 systemd-logind[1557]: New session 17 of user core. Nov 1 00:50:53.992690 systemd[1]: Started session-17.scope. 
Nov 1 00:50:53.501000 audit[6904]: CRED_DISP pid=6904 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:54.073244 sshd[6953]: pam_unix(sshd:session): session closed for user core Nov 1 00:50:54.074777 systemd[1]: sshd@15-145.40.82.49:22-147.75.109.163:46444.service: Deactivated successfully. Nov 1 00:50:54.075408 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:50:54.075798 systemd-logind[1557]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:50:54.076210 systemd-logind[1557]: Removed session 17. Nov 1 00:50:54.148759 kernel: audit: type=1104 audit(1761958253.501:1414): pid=6904 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-145.40.82.49:22-147.75.109.163:46426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:50:53.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-145.40.82.49:22-147.75.109.163:46436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:50:53.681000 audit[6930]: USER_ACCT pid=6930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.682000 audit[6930]: CRED_ACQ pid=6930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.682000 audit[6930]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3b0ba980 a2=3 a3=0 items=0 ppid=1 pid=6930 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:50:53.682000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:50:53.687000 audit[6930]: USER_START pid=6930 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.688000 audit[6932]: CRED_ACQ pid=6932 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.787000 audit[6930]: USER_END pid=6930 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.787000 audit[6930]: CRED_DISP pid=6930 uid=0 auid=500 
ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-145.40.82.49:22-147.75.109.163:46436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:50:53.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-145.40.82.49:22-147.75.109.163:46444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:50:53.987000 audit[6953]: USER_ACCT pid=6953 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.989000 audit[6953]: CRED_ACQ pid=6953 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.989000 audit[6953]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2a0b1cb0 a2=3 a3=0 items=0 ppid=1 pid=6953 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:50:53.989000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:50:53.994000 audit[6953]: USER_START pid=6953 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:53.995000 audit[6955]: CRED_ACQ pid=6955 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:54.073000 audit[6953]: USER_END pid=6953 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:54.073000 audit[6953]: CRED_DISP pid=6953 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:54.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-145.40.82.49:22-147.75.109.163:46444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:50:56.537314 kubelet[2501]: E1101 00:50:56.537196 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:50:56.865000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:56.865000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:56.865000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0019f0340 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:50:56.865000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:50:56.865000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c002a54990 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:50:56.865000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:50:56.950000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:56.950000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c01201f2c0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:50:56.950000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:50:56.950000 audit[2326]: AVC avc: denied { watch } for 
pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sdb9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:56.950000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c009809ce0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:50:56.950000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:50:56.955000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:56.955000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:56.955000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sdb9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:56.955000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c00778c780 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:50:56.955000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00fa275a0 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:50:56.955000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:50:56.955000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:50:56.955000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=72 a1=c01287ea80 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:50:56.955000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:50:56.955000 audit[2326]: AVC avc: denied { watch } for pid=2326 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sdb9" ino=520987 scontext=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:50:56.955000 audit[2326]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=73 a1=c009809d10 a2=fc6 a3=0 items=0 ppid=2157 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c375,c603 key=(null) Nov 1 00:50:56.955000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3134352E34302E38322E3439002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Nov 1 00:50:58.536759 kubelet[2501]: E1101 00:50:58.536664 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:50:59.076877 systemd[1]: Started sshd@16-145.40.82.49:22-147.75.109.163:46456.service. Nov 1 00:50:59.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-145.40.82.49:22-147.75.109.163:46456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:50:59.103934 kernel: kauditd_printk_skb: 47 callbacks suppressed Nov 1 00:50:59.104030 kernel: audit: type=1130 audit(1761958259.075:1442): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-145.40.82.49:22-147.75.109.163:46456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:50:59.216000 audit[7000]: USER_ACCT pid=7000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:59.217585 sshd[7000]: Accepted publickey for core from 147.75.109.163 port 46456 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:50:59.218953 sshd[7000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:50:59.221198 systemd-logind[1557]: New session 18 of user core. Nov 1 00:50:59.221946 systemd[1]: Started session-18.scope. Nov 1 00:50:59.302296 sshd[7000]: pam_unix(sshd:session): session closed for user core Nov 1 00:50:59.303668 systemd[1]: sshd@16-145.40.82.49:22-147.75.109.163:46456.service: Deactivated successfully. 
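The AVC records above show SELinux (permissive=0) denying kube-apiserver and kube-controller-manager the watch permission on the certificate files under /etc/kubernetes/pki, and each paired SYSCALL record reports arch=c000003e (x86_64), syscall=254, exit=-13. On x86_64, syscall 254 is inotify_add_watch and -13 is -EACCES, which matches the { watch } denial. A tiny helper that maps those two numbers; the table is deliberately limited to the one syscall seen in this log:

import errno

# Only the syscall number that appears in these records; a full table would come
# from the x86_64 syscall list.
SYSCALLS_X86_64 = {254: "inotify_add_watch"}

def explain(syscall_nr: int, exit_code: int) -> str:
    name = SYSCALLS_X86_64.get(syscall_nr, f"syscall {syscall_nr}")
    if exit_code < 0:
        return f"{name} failed with {errno.errorcode.get(-exit_code, exit_code)}"
    return f"{name} returned {exit_code}"

print(explain(254, -13))  # inotify_add_watch failed with EACCES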
Nov 1 00:50:59.304155 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:50:59.304518 systemd-logind[1557]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:50:59.304931 systemd-logind[1557]: Removed session 18. Nov 1 00:50:59.218000 audit[7000]: CRED_ACQ pid=7000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:59.399538 kernel: audit: type=1101 audit(1761958259.216:1443): pid=7000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:59.399627 kernel: audit: type=1103 audit(1761958259.218:1444): pid=7000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:59.399646 kernel: audit: type=1006 audit(1761958259.218:1445): pid=7000 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Nov 1 00:50:59.218000 audit[7000]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff8083490 a2=3 a3=0 items=0 ppid=1 pid=7000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:50:59.550275 kernel: audit: type=1300 audit(1761958259.218:1445): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff8083490 a2=3 a3=0 items=0 ppid=1 pid=7000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:50:59.550360 kernel: audit: type=1327 audit(1761958259.218:1445): proctitle=737368643A20636F7265205B707269765D Nov 1 00:50:59.218000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:50:59.580819 kernel: audit: type=1105 audit(1761958259.222:1446): pid=7000 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:59.222000 audit[7000]: USER_START pid=7000 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:59.675342 kernel: audit: type=1103 audit(1761958259.223:1447): pid=7002 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:59.223000 audit[7002]: CRED_ACQ pid=7002 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:59.764634 kernel: audit: type=1106 
audit(1761958259.301:1448): pid=7000 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:59.301000 audit[7000]: USER_END pid=7000 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:59.860238 kernel: audit: type=1104 audit(1761958259.301:1449): pid=7000 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:59.301000 audit[7000]: CRED_DISP pid=7000 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:50:59.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-145.40.82.49:22-147.75.109.163:46456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:01.531340 kubelet[2501]: E1101 00:51:01.531241 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:51:03.477000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:51:03.477000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001c30ac0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:51:03.477000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:51:03.477000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:51:03.477000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0019647e0 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:51:03.477000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:51:03.512000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:51:03.512000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001c30c60 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:51:03.512000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:51:03.512000 audit[2319]: AVC avc: denied { watch } for pid=2319 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sdb9" ino=520981 scontext=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 00:51:03.512000 audit[2319]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0017fb340 a2=fc6 a3=0 items=0 ppid=2186 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c328,c532 key=(null) Nov 1 00:51:03.512000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 00:51:03.532676 kubelet[2501]: E1101 00:51:03.532575 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:51:04.314350 systemd[1]: Started 
sshd@17-145.40.82.49:22-147.75.109.163:60056.service. Nov 1 00:51:04.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-145.40.82.49:22-147.75.109.163:60056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:04.356782 kernel: kauditd_printk_skb: 13 callbacks suppressed Nov 1 00:51:04.356894 kernel: audit: type=1130 audit(1761958264.313:1455): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-145.40.82.49:22-147.75.109.163:60056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:04.469000 audit[7029]: USER_ACCT pid=7029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:04.470671 sshd[7029]: Accepted publickey for core from 147.75.109.163 port 60056 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:51:04.473922 sshd[7029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:51:04.481567 systemd-logind[1557]: New session 19 of user core. Nov 1 00:51:04.484366 systemd[1]: Started session-19.scope. Nov 1 00:51:04.472000 audit[7029]: CRED_ACQ pid=7029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:04.568831 sshd[7029]: pam_unix(sshd:session): session closed for user core Nov 1 00:51:04.570193 systemd[1]: sshd@17-145.40.82.49:22-147.75.109.163:60056.service: Deactivated successfully. Nov 1 00:51:04.570659 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:51:04.571042 systemd-logind[1557]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:51:04.571460 systemd-logind[1557]: Removed session 19. 
Nov 1 00:51:04.652635 kernel: audit: type=1101 audit(1761958264.469:1456): pid=7029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:04.652706 kernel: audit: type=1103 audit(1761958264.472:1457): pid=7029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:04.652734 kernel: audit: type=1006 audit(1761958264.472:1458): pid=7029 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Nov 1 00:51:04.472000 audit[7029]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff762189e0 a2=3 a3=0 items=0 ppid=1 pid=7029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:04.803498 kernel: audit: type=1300 audit(1761958264.472:1458): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff762189e0 a2=3 a3=0 items=0 ppid=1 pid=7029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:04.803575 kernel: audit: type=1327 audit(1761958264.472:1458): proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:04.472000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:04.489000 audit[7029]: USER_START pid=7029 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:04.928715 kernel: audit: type=1105 audit(1761958264.489:1459): pid=7029 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:04.928791 kernel: audit: type=1103 audit(1761958264.491:1460): pid=7031 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:04.491000 audit[7031]: CRED_ACQ pid=7031 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:05.018121 kernel: audit: type=1106 audit(1761958264.567:1461): pid=7029 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:04.567000 audit[7029]: USER_END pid=7029 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:05.113766 kernel: audit: type=1104 audit(1761958264.567:1462): pid=7029 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:04.567000 audit[7029]: CRED_DISP pid=7029 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:04.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-145.40.82.49:22-147.75.109.163:60056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:06.529917 kubelet[2501]: E1101 00:51:06.529865 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:51:07.530431 kubelet[2501]: E1101 00:51:07.530345 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:51:08.529471 kubelet[2501]: E1101 00:51:08.529417 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:51:09.581929 systemd[1]: Started sshd@18-145.40.82.49:22-147.75.109.163:60068.service. 
Nov 1 00:51:09.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-145.40.82.49:22-147.75.109.163:60068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:09.617034 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:51:09.617117 kernel: audit: type=1130 audit(1761958269.580:1464): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-145.40.82.49:22-147.75.109.163:60068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:09.730803 sshd[7089]: Accepted publickey for core from 147.75.109.163 port 60068 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:51:09.729000 audit[7089]: USER_ACCT pid=7089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:09.732329 sshd[7089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:51:09.734882 systemd-logind[1557]: New session 20 of user core. Nov 1 00:51:09.735534 systemd[1]: Started session-20.scope. Nov 1 00:51:09.814668 sshd[7089]: pam_unix(sshd:session): session closed for user core Nov 1 00:51:09.816133 systemd[1]: sshd@18-145.40.82.49:22-147.75.109.163:60068.service: Deactivated successfully. Nov 1 00:51:09.816607 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:51:09.816951 systemd-logind[1557]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:51:09.817331 systemd-logind[1557]: Removed session 20. Nov 1 00:51:09.730000 audit[7089]: CRED_ACQ pid=7089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:09.912931 kernel: audit: type=1101 audit(1761958269.729:1465): pid=7089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:09.913006 kernel: audit: type=1103 audit(1761958269.730:1466): pid=7089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:09.913026 kernel: audit: type=1006 audit(1761958269.730:1467): pid=7089 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Nov 1 00:51:09.730000 audit[7089]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5ce319c0 a2=3 a3=0 items=0 ppid=1 pid=7089 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:10.063652 kernel: audit: type=1300 audit(1761958269.730:1467): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5ce319c0 a2=3 a3=0 items=0 ppid=1 pid=7089 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 
1 00:51:10.063701 kernel: audit: type=1327 audit(1761958269.730:1467): proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:09.730000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:10.094180 kernel: audit: type=1105 audit(1761958269.736:1468): pid=7089 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:09.736000 audit[7089]: USER_START pid=7089 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:10.188706 kernel: audit: type=1103 audit(1761958269.737:1469): pid=7091 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:09.737000 audit[7091]: CRED_ACQ pid=7091 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:10.279510 kernel: audit: type=1106 audit(1761958269.813:1470): pid=7089 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:09.813000 audit[7089]: USER_END pid=7089 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:10.373644 kernel: audit: type=1104 audit(1761958269.813:1471): pid=7089 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:09.813000 audit[7089]: CRED_DISP pid=7089 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:09.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-145.40.82.49:22-147.75.109.163:60068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:51:13.529916 kubelet[2501]: E1101 00:51:13.529859 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:51:14.827308 systemd[1]: Started sshd@19-145.40.82.49:22-147.75.109.163:50272.service. Nov 1 00:51:14.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-145.40.82.49:22-147.75.109.163:50272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:14.870900 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:51:14.871028 kernel: audit: type=1130 audit(1761958274.826:1473): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-145.40.82.49:22-147.75.109.163:50272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:14.982000 audit[7114]: USER_ACCT pid=7114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:14.984437 sshd[7114]: Accepted publickey for core from 147.75.109.163 port 50272 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:51:14.985846 sshd[7114]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:51:14.988658 systemd-logind[1557]: New session 21 of user core. Nov 1 00:51:14.989189 systemd[1]: Started session-21.scope. Nov 1 00:51:15.070573 sshd[7114]: pam_unix(sshd:session): session closed for user core Nov 1 00:51:15.072477 systemd[1]: sshd@19-145.40.82.49:22-147.75.109.163:50272.service: Deactivated successfully. Nov 1 00:51:15.072837 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:51:15.073191 systemd-logind[1557]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:51:15.073792 systemd[1]: Started sshd@20-145.40.82.49:22-147.75.109.163:50286.service. Nov 1 00:51:15.074315 systemd-logind[1557]: Removed session 21. 
Nov 1 00:51:14.984000 audit[7114]: CRED_ACQ pid=7114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.166158 kernel: audit: type=1101 audit(1761958274.982:1474): pid=7114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.166227 kernel: audit: type=1103 audit(1761958274.984:1475): pid=7114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.166247 kernel: audit: type=1006 audit(1761958274.984:1476): pid=7114 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Nov 1 00:51:15.224760 kernel: audit: type=1300 audit(1761958274.984:1476): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd68ea80c0 a2=3 a3=0 items=0 ppid=1 pid=7114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:14.984000 audit[7114]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd68ea80c0 a2=3 a3=0 items=0 ppid=1 pid=7114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:15.249128 sshd[7138]: Accepted publickey for core from 147.75.109.163 port 50286 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:51:15.251169 sshd[7138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:51:15.253845 systemd-logind[1557]: New session 22 of user core. Nov 1 00:51:15.254397 systemd[1]: Started session-22.scope. Nov 1 00:51:15.316752 kernel: audit: type=1327 audit(1761958274.984:1476): proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:14.984000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:14.989000 audit[7114]: USER_START pid=7114 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.350958 sshd[7138]: pam_unix(sshd:session): session closed for user core Nov 1 00:51:15.353048 systemd[1]: sshd@20-145.40.82.49:22-147.75.109.163:50286.service: Deactivated successfully. Nov 1 00:51:15.353446 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:51:15.353837 systemd-logind[1557]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:51:15.354568 systemd[1]: Started sshd@21-145.40.82.49:22-147.75.109.163:50300.service. Nov 1 00:51:15.355049 systemd-logind[1557]: Removed session 22. 
Nov 1 00:51:15.441773 kernel: audit: type=1105 audit(1761958274.989:1477): pid=7114 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.441854 kernel: audit: type=1103 audit(1761958274.990:1478): pid=7116 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:14.990000 audit[7116]: CRED_ACQ pid=7116 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.529445 kubelet[2501]: E1101 00:51:15.529423 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:51:15.069000 audit[7114]: USER_END pid=7114 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.531589 kernel: audit: type=1106 audit(1761958275.069:1479): pid=7114 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.555315 sshd[7160]: Accepted publickey for core from 147.75.109.163 port 50300 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:51:15.556851 sshd[7160]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:51:15.559326 systemd-logind[1557]: New session 23 of user core. Nov 1 00:51:15.559790 systemd[1]: Started session-23.scope. Nov 1 00:51:15.069000 audit[7114]: CRED_DISP pid=7114 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.715956 kernel: audit: type=1104 audit(1761958275.069:1480): pid=7114 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-145.40.82.49:22-147.75.109.163:50272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:51:15.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-145.40.82.49:22-147.75.109.163:50286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:15.247000 audit[7138]: USER_ACCT pid=7138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.249000 audit[7138]: CRED_ACQ pid=7138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.249000 audit[7138]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde42f0e10 a2=3 a3=0 items=0 ppid=1 pid=7138 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:15.249000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:15.255000 audit[7138]: USER_START pid=7138 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.255000 audit[7140]: CRED_ACQ pid=7140 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.350000 audit[7138]: USER_END pid=7138 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.350000 audit[7138]: CRED_DISP pid=7138 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-145.40.82.49:22-147.75.109.163:50286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:15.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-145.40.82.49:22-147.75.109.163:50300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:51:15.553000 audit[7160]: USER_ACCT pid=7160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.555000 audit[7160]: CRED_ACQ pid=7160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.555000 audit[7160]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd882bc3c0 a2=3 a3=0 items=0 ppid=1 pid=7160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:15.555000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:15.561000 audit[7160]: USER_START pid=7160 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:15.562000 audit[7162]: CRED_ACQ pid=7162 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.191000 audit[7188]: NETFILTER_CFG table=filter:126 family=2 entries=26 op=nft_register_rule pid=7188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:51:16.191000 audit[7188]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff97306e90 a2=0 a3=7fff97306e7c items=0 ppid=2683 pid=7188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:16.191000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:51:16.202116 sshd[7160]: pam_unix(sshd:session): session closed for user core Nov 1 00:51:16.200000 audit[7188]: NETFILTER_CFG table=nat:127 family=2 entries=20 op=nft_register_rule pid=7188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:51:16.200000 audit[7188]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff97306e90 a2=0 a3=0 items=0 ppid=2683 pid=7188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:16.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:51:16.201000 audit[7160]: USER_END pid=7160 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.201000 audit[7160]: CRED_DISP pid=7160 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.204795 systemd[1]: sshd@21-145.40.82.49:22-147.75.109.163:50300.service: Deactivated successfully. Nov 1 00:51:16.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-145.40.82.49:22-147.75.109.163:50300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:16.205363 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:51:16.205916 systemd-logind[1557]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:51:16.207050 systemd[1]: Started sshd@22-145.40.82.49:22-147.75.109.163:50310.service. Nov 1 00:51:16.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-145.40.82.49:22-147.75.109.163:50310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:16.207856 systemd-logind[1557]: Removed session 23. Nov 1 00:51:16.215000 audit[7194]: NETFILTER_CFG table=filter:128 family=2 entries=38 op=nft_register_rule pid=7194 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:51:16.215000 audit[7194]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc89e739f0 a2=0 a3=7ffc89e739dc items=0 ppid=2683 pid=7194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:16.215000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:51:16.229000 audit[7194]: NETFILTER_CFG table=nat:129 family=2 entries=20 op=nft_register_rule pid=7194 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:51:16.229000 audit[7194]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc89e739f0 a2=0 a3=0 items=0 ppid=2683 pid=7194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:16.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:51:16.240000 audit[7192]: USER_ACCT pid=7192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.242462 sshd[7192]: Accepted publickey for core from 147.75.109.163 port 50310 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:51:16.241000 audit[7192]: CRED_ACQ pid=7192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.241000 audit[7192]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff0a6d0b0 a2=3 a3=0 items=0 ppid=1 pid=7192 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:51:16.241000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:16.243536 sshd[7192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:51:16.247177 systemd-logind[1557]: New session 24 of user core. Nov 1 00:51:16.248072 systemd[1]: Started session-24.scope. Nov 1 00:51:16.250000 audit[7192]: USER_START pid=7192 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.251000 audit[7197]: CRED_ACQ pid=7197 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.521862 sshd[7192]: pam_unix(sshd:session): session closed for user core Nov 1 00:51:16.521000 audit[7192]: USER_END pid=7192 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.521000 audit[7192]: CRED_DISP pid=7192 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.524247 systemd[1]: sshd@22-145.40.82.49:22-147.75.109.163:50310.service: Deactivated successfully. Nov 1 00:51:16.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-145.40.82.49:22-147.75.109.163:50310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:16.524724 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:51:16.525119 systemd-logind[1557]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:51:16.525985 systemd[1]: Started sshd@23-145.40.82.49:22-147.75.109.163:50312.service. Nov 1 00:51:16.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-145.40.82.49:22-147.75.109.163:50312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:16.526438 systemd-logind[1557]: Removed session 24. 
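The PROCTITLE fields in the audit records above carry the process command line as one hex string with NUL bytes separating arguments, and the SYSCALL records name the call only by number (arch=c000003e is x86_64, where syscall 1 is write and 46 is sendmsg). A minimal Python sketch for decoding those fields offline, using only the standard library; the sample values are copied verbatim from the records above and the helper names are illustrative:

    # Decode audit PROCTITLE hex strings as logged above.
    import binascii

    def decode_proctitle(hex_field: str) -> str:
        # audit hex-encodes argv, with NUL separators between arguments
        raw = binascii.unhexlify(hex_field)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00"))

    # The only syscall numbers appearing in these SYSCALL records (x86_64).
    X86_64_SYSCALLS = {1: "write", 46: "sendmsg"}

    if __name__ == "__main__":
        print(decode_proctitle("737368643A20636F7265205B707269765D"))
        # -> sshd: core [priv]
        print(decode_proctitle(
            "69707461626C65732D726573746F7265002D770035002D5700313030303030"
            "002D2D6E6F666C757368002D2D636F756E74657273"))
        # -> iptables-restore -w 5 -W 100000 --noflush --counters

Decoded, the sshd PROCTITLE is the privileged monitor process for the "core" login, and the repeated NETFILTER_CFG entries come from iptables-restore invoked with -w 5 -W 100000 --noflush --counters, consistent with a periodic rule sync by a long-running parent (identified here only as ppid=2683).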
Nov 1 00:51:16.531547 kubelet[2501]: E1101 00:51:16.531522 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:51:16.555000 audit[7218]: USER_ACCT pid=7218 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.556791 sshd[7218]: Accepted publickey for core from 147.75.109.163 port 50312 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:51:16.555000 audit[7218]: CRED_ACQ pid=7218 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.555000 audit[7218]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcc9b6950 a2=3 a3=0 items=0 ppid=1 pid=7218 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:16.555000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:16.557626 sshd[7218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:51:16.560211 systemd-logind[1557]: New session 25 of user core. Nov 1 00:51:16.560711 systemd[1]: Started session-25.scope. 
Nov 1 00:51:16.561000 audit[7218]: USER_START pid=7218 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.562000 audit[7222]: CRED_ACQ pid=7222 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.687006 sshd[7218]: pam_unix(sshd:session): session closed for user core Nov 1 00:51:16.686000 audit[7218]: USER_END pid=7218 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.686000 audit[7218]: CRED_DISP pid=7218 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:16.688430 systemd[1]: sshd@23-145.40.82.49:22-147.75.109.163:50312.service: Deactivated successfully. Nov 1 00:51:16.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-145.40.82.49:22-147.75.109.163:50312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:16.688907 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:51:16.689281 systemd-logind[1557]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:51:16.689763 systemd-logind[1557]: Removed session 25. 
Nov 1 00:51:19.531084 kubelet[2501]: E1101 00:51:19.530928 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 00:51:20.467000 audit[7245]: NETFILTER_CFG table=filter:130 family=2 entries=26 op=nft_register_rule pid=7245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:51:20.505623 kernel: kauditd_printk_skb: 57 callbacks suppressed Nov 1 00:51:20.505699 kernel: audit: type=1325 audit(1761958280.467:1522): table=filter:130 family=2 entries=26 op=nft_register_rule pid=7245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:51:20.467000 audit[7245]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd7181750 a2=0 a3=7ffcd718173c items=0 ppid=2683 pid=7245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:20.660324 kernel: audit: type=1300 audit(1761958280.467:1522): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd7181750 a2=0 a3=7ffcd718173c items=0 ppid=2683 pid=7245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:20.660398 kernel: audit: type=1327 audit(1761958280.467:1522): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:51:20.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:51:20.721000 audit[7245]: NETFILTER_CFG table=nat:131 family=2 entries=104 op=nft_register_chain pid=7245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:51:20.721000 audit[7245]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffcd7181750 a2=0 a3=7ffcd718173c items=0 ppid=2683 pid=7245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:20.879052 kernel: audit: type=1325 audit(1761958280.721:1523): table=nat:131 family=2 entries=104 op=nft_register_chain pid=7245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:51:20.879103 kernel: audit: type=1300 audit(1761958280.721:1523): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffcd7181750 a2=0 a3=7ffcd718173c items=0 ppid=2683 pid=7245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:20.879120 kernel: audit: type=1327 audit(1761958280.721:1523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:51:20.721000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:51:21.529982 kubelet[2501]: E1101 00:51:21.529951 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:51:21.689587 systemd[1]: Started sshd@24-145.40.82.49:22-147.75.109.163:33070.service. Nov 1 00:51:21.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-145.40.82.49:22-147.75.109.163:33070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:21.778554 kernel: audit: type=1130 audit(1761958281.688:1524): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-145.40.82.49:22-147.75.109.163:33070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:21.801000 audit[7247]: USER_ACCT pid=7247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:21.802904 sshd[7247]: Accepted publickey for core from 147.75.109.163 port 33070 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:51:21.804083 sshd[7247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:51:21.806724 systemd-logind[1557]: New session 26 of user core. Nov 1 00:51:21.807245 systemd[1]: Started session-26.scope. Nov 1 00:51:21.886274 sshd[7247]: pam_unix(sshd:session): session closed for user core Nov 1 00:51:21.887902 systemd[1]: sshd@24-145.40.82.49:22-147.75.109.163:33070.service: Deactivated successfully. Nov 1 00:51:21.888360 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:51:21.888756 systemd-logind[1557]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:51:21.889428 systemd-logind[1557]: Removed session 26. 
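Several of the kubelet entries in this journal report the same failure pattern: "Error syncing pod, skipping" with ImagePullBackOff because ghcr.io/flatcar/calico images tagged v3.30.4 cannot be resolved. A rough Python sketch, assuming plain journal text on stdin with one entry per line, that gathers the failing image references per pod from lines in exactly this format (the regexes and names are illustrative, not part of any tool shown in this log):

    # Summarize ImagePullBackOff failures from kubelet "Error syncing pod" lines.
    import re
    import sys
    from collections import defaultdict

    IMAGE_RE = re.compile(r'Back-off pulling image \\+"([^\\"]+)\\+"')
    POD_RE = re.compile(r'pod="([^"]+)"')

    def failing_images(lines):
        by_pod = defaultdict(set)
        for line in lines:
            if "Error syncing pod" not in line:
                continue
            pod = POD_RE.search(line)
            for image in IMAGE_RE.findall(line):
                by_pod[pod.group(1) if pod else "?"].add(image)
        return by_pod

    if __name__ == "__main__":
        for pod, images in sorted(failing_images(sys.stdin).items()):
            print(pod)
            for image in sorted(images):
                print("   ", image)

Run against this journal it would list csi-node-driver-qpkjg, goldmane-666569f655-bxfpm, calico-kube-controllers-859bf66984-8h8hn, whisker-84c874fc74-qqjt8 and the two calico-apiserver pods, each backing off on a calico image at tag v3.30.4 that the registry reports as not found.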
Nov 1 00:51:21.802000 audit[7247]: CRED_ACQ pid=7247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:21.986737 kernel: audit: type=1101 audit(1761958281.801:1525): pid=7247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:21.986809 kernel: audit: type=1103 audit(1761958281.802:1526): pid=7247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:21.986829 kernel: audit: type=1006 audit(1761958281.802:1527): pid=7247 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Nov 1 00:51:21.802000 audit[7247]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc799a85d0 a2=3 a3=0 items=0 ppid=1 pid=7247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:21.802000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:21.809000 audit[7247]: USER_START pid=7247 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:21.810000 audit[7249]: CRED_ACQ pid=7249 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:21.885000 audit[7247]: USER_END pid=7247 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:21.885000 audit[7247]: CRED_DISP pid=7247 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:21.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-145.40.82.49:22-147.75.109.163:33070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:51:22.530628 kubelet[2501]: E1101 00:51:22.530603 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:51:26.529584 kubelet[2501]: E1101 00:51:26.529540 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-s7kdv" podUID="2f6bb8ca-d7c7-4d64-919c-e85097fdc068" Nov 1 00:51:26.529982 kubelet[2501]: E1101 00:51:26.529540 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d69b6c6c-vws84" podUID="af4cf953-58b5-4727-a1f5-dcd340748032" Nov 1 00:51:26.898609 systemd[1]: Started sshd@25-145.40.82.49:22-147.75.109.163:33086.service. Nov 1 00:51:26.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-145.40.82.49:22-147.75.109.163:33086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:26.938435 kernel: kauditd_printk_skb: 7 callbacks suppressed Nov 1 00:51:26.938552 kernel: audit: type=1130 audit(1761958286.897:1533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-145.40.82.49:22-147.75.109.163:33086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:51:27.050000 audit[7273]: USER_ACCT pid=7273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:27.052394 sshd[7273]: Accepted publickey for core from 147.75.109.163 port 33086 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:51:27.054852 sshd[7273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:51:27.057716 systemd-logind[1557]: New session 27 of user core. Nov 1 00:51:27.058178 systemd[1]: Started session-27.scope. Nov 1 00:51:27.136136 sshd[7273]: pam_unix(sshd:session): session closed for user core Nov 1 00:51:27.137486 systemd[1]: sshd@25-145.40.82.49:22-147.75.109.163:33086.service: Deactivated successfully. Nov 1 00:51:27.137922 systemd[1]: session-27.scope: Deactivated successfully. Nov 1 00:51:27.138249 systemd-logind[1557]: Session 27 logged out. Waiting for processes to exit. Nov 1 00:51:27.139106 systemd-logind[1557]: Removed session 27. Nov 1 00:51:27.053000 audit[7273]: CRED_ACQ pid=7273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:27.235352 kernel: audit: type=1101 audit(1761958287.050:1534): pid=7273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:27.235449 kernel: audit: type=1103 audit(1761958287.053:1535): pid=7273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:27.235472 kernel: audit: type=1006 audit(1761958287.053:1536): pid=7273 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Nov 1 00:51:27.053000 audit[7273]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa1512230 a2=3 a3=0 items=0 ppid=1 pid=7273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:27.387465 kernel: audit: type=1300 audit(1761958287.053:1536): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa1512230 a2=3 a3=0 items=0 ppid=1 pid=7273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:27.387529 kernel: audit: type=1327 audit(1761958287.053:1536): proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:27.053000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:27.059000 audit[7273]: USER_START pid=7273 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:27.418506 kernel: audit: type=1105 audit(1761958287.059:1537): 
pid=7273 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:27.059000 audit[7275]: CRED_ACQ pid=7275 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:27.530354 kubelet[2501]: E1101 00:51:27.530304 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpkjg" podUID="ce15fc81-d33c-45b3-b08a-5d312fb076f0" Nov 1 00:51:27.603043 kernel: audit: type=1103 audit(1761958287.059:1538): pid=7275 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:27.603095 kernel: audit: type=1106 audit(1761958287.135:1539): pid=7273 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:27.135000 audit[7273]: USER_END pid=7273 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:27.135000 audit[7273]: CRED_DISP pid=7273 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:27.699514 kernel: audit: type=1104 audit(1761958287.135:1540): pid=7273 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:27.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-145.40.82.49:22-147.75.109.163:33086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:51:27.970618 update_engine[1559]: I1101 00:51:27.970594 1559 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 1 00:51:27.970618 update_engine[1559]: I1101 00:51:27.970613 1559 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 1 00:51:27.971753 update_engine[1559]: I1101 00:51:27.971714 1559 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 1 00:51:27.971977 update_engine[1559]: I1101 00:51:27.971936 1559 omaha_request_params.cc:62] Current group set to lts Nov 1 00:51:27.972016 update_engine[1559]: I1101 00:51:27.972010 1559 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 1 00:51:27.972016 update_engine[1559]: I1101 00:51:27.972014 1559 update_attempter.cc:643] Scheduling an action processor start. Nov 1 00:51:27.972057 update_engine[1559]: I1101 00:51:27.972023 1559 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 00:51:27.972057 update_engine[1559]: I1101 00:51:27.972038 1559 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 1 00:51:27.972098 update_engine[1559]: I1101 00:51:27.972069 1559 omaha_request_action.cc:270] Posting an Omaha request to disabled Nov 1 00:51:27.972098 update_engine[1559]: I1101 00:51:27.972072 1559 omaha_request_action.cc:271] Request: Nov 1 00:51:27.972098 update_engine[1559]: Nov 1 00:51:27.972098 update_engine[1559]: Nov 1 00:51:27.972098 update_engine[1559]: Nov 1 00:51:27.972098 update_engine[1559]: Nov 1 00:51:27.972098 update_engine[1559]: Nov 1 00:51:27.972098 update_engine[1559]: Nov 1 00:51:27.972098 update_engine[1559]: Nov 1 00:51:27.972098 update_engine[1559]: Nov 1 00:51:27.972098 update_engine[1559]: I1101 00:51:27.972076 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:51:27.972280 locksmithd[1600]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 1 00:51:27.972719 update_engine[1559]: I1101 00:51:27.972683 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:51:27.972761 update_engine[1559]: E1101 00:51:27.972736 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 00:51:27.972786 update_engine[1559]: I1101 00:51:27.972769 1559 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 1 00:51:32.142422 systemd[1]: Started sshd@26-145.40.82.49:22-147.75.109.163:33932.service. Nov 1 00:51:32.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-145.40.82.49:22-147.75.109.163:33932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:32.169697 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:51:32.169808 kernel: audit: type=1130 audit(1761958292.141:1542): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-145.40.82.49:22-147.75.109.163:33932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:51:32.281000 audit[7297]: USER_ACCT pid=7297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:32.283047 sshd[7297]: Accepted publickey for core from 147.75.109.163 port 33932 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:51:32.284826 sshd[7297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:51:32.287289 systemd-logind[1557]: New session 28 of user core. Nov 1 00:51:32.287820 systemd[1]: Started session-28.scope. Nov 1 00:51:32.283000 audit[7297]: CRED_ACQ pid=7297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:32.378054 sshd[7297]: pam_unix(sshd:session): session closed for user core Nov 1 00:51:32.379332 systemd[1]: sshd@26-145.40.82.49:22-147.75.109.163:33932.service: Deactivated successfully. Nov 1 00:51:32.379819 systemd[1]: session-28.scope: Deactivated successfully. Nov 1 00:51:32.380126 systemd-logind[1557]: Session 28 logged out. Waiting for processes to exit. Nov 1 00:51:32.380515 systemd-logind[1557]: Removed session 28. Nov 1 00:51:32.465668 kernel: audit: type=1101 audit(1761958292.281:1543): pid=7297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:32.465750 kernel: audit: type=1103 audit(1761958292.283:1544): pid=7297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:32.465770 kernel: audit: type=1006 audit(1761958292.283:1545): pid=7297 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Nov 1 00:51:32.524366 kernel: audit: type=1300 audit(1761958292.283:1545): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2dd91300 a2=3 a3=0 items=0 ppid=1 pid=7297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:32.283000 audit[7297]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2dd91300 a2=3 a3=0 items=0 ppid=1 pid=7297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:32.529356 kubelet[2501]: E1101 00:51:32.529334 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxfpm" podUID="135646f8-0c66-45b5-80ce-9bb45c825de7" Nov 1 
00:51:32.283000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:32.647028 kernel: audit: type=1327 audit(1761958292.283:1545): proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:32.647119 kernel: audit: type=1105 audit(1761958292.288:1546): pid=7297 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:32.288000 audit[7297]: USER_START pid=7297 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:32.288000 audit[7299]: CRED_ACQ pid=7299 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:32.830833 kernel: audit: type=1103 audit(1761958292.288:1547): pid=7299 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:32.830918 kernel: audit: type=1106 audit(1761958292.377:1548): pid=7297 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:32.377000 audit[7297]: USER_END pid=7297 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:32.377000 audit[7297]: CRED_DISP pid=7297 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:32.926631 kernel: audit: type=1104 audit(1761958292.377:1549): pid=7297 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:32.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-145.40.82.49:22-147.75.109.163:33932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:51:34.529832 kubelet[2501]: E1101 00:51:34.529801 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859bf66984-8h8hn" podUID="9cd09edd-44db-4b31-b369-b622badfedc3" Nov 1 00:51:37.380900 systemd[1]: Started sshd@27-145.40.82.49:22-147.75.109.163:33936.service. Nov 1 00:51:37.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-145.40.82.49:22-147.75.109.163:33936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:37.407611 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:51:37.407679 kernel: audit: type=1130 audit(1761958297.379:1551): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-145.40.82.49:22-147.75.109.163:33936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:51:37.438681 sshd[7354]: Accepted publickey for core from 147.75.109.163 port 33936 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 00:51:37.439596 sshd[7354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:51:37.442359 systemd-logind[1557]: New session 29 of user core. Nov 1 00:51:37.443256 systemd[1]: Started session-29.scope. Nov 1 00:51:37.437000 audit[7354]: USER_ACCT pid=7354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:37.521434 sshd[7354]: pam_unix(sshd:session): session closed for user core Nov 1 00:51:37.522912 systemd[1]: sshd@27-145.40.82.49:22-147.75.109.163:33936.service: Deactivated successfully. Nov 1 00:51:37.523413 systemd[1]: session-29.scope: Deactivated successfully. Nov 1 00:51:37.523795 systemd-logind[1557]: Session 29 logged out. Waiting for processes to exit. Nov 1 00:51:37.524233 systemd-logind[1557]: Removed session 29. 
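The PAM audit trail above records a full lifecycle for every SSH session from 147.75.109.163: USER_ACCT and CRED_ACQ on accept, USER_START when the session scope opens, USER_END and CRED_DISP when it closes, bracketed by SERVICE_START/SERVICE_STOP for the per-connection sshd@ unit. A small Python sketch, assuming journal text with one entry per line as journalctl would emit it and a known year (the timestamps above omit it), that pairs USER_START with USER_END by session id to measure how long each session stayed open; the helper is illustrative:

    # Pair USER_START / USER_END audit records by ses= id and report durations.
    import re
    import sys
    from datetime import datetime

    AUDIT_RE = re.compile(
        r'^(\w+ +\d+ \d\d:\d\d:\d\d\.\d+) audit\[\d+\]: '
        r'(USER_START|USER_END) .*?\bses=(\d+)')

    def session_durations(lines, year=2025):
        opened, durations = {}, {}
        for line in lines:
            m = AUDIT_RE.match(line)
            if not m:
                continue
            stamp = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
            ses = m.group(3)
            if m.group(2) == "USER_START":
                opened[ses] = stamp
            elif ses in opened:
                durations[ses] = stamp - opened.pop(ses)
        return durations

    if __name__ == "__main__":
        for ses, dur in sorted(session_durations(sys.stdin).items(), key=lambda kv: int(kv[0])):
            print(f"session {ses}: {dur.total_seconds():.3f}s")

For the sessions above this comes out to well under a second each (for example session 27 opens at 00:51:27.059 and closes at 00:51:27.135), which is consistent with a scripted client opening a connection, running a short command, and disconnecting.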
Nov 1 00:51:37.530390 kubelet[2501]: E1101 00:51:37.530360 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c874fc74-qqjt8" podUID="30a69d9b-8fc2-4064-9c4c-2a9a4d33a87d" Nov 1 00:51:37.588809 kernel: audit: type=1101 audit(1761958297.437:1552): pid=7354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:37.588917 kernel: audit: type=1103 audit(1761958297.437:1553): pid=7354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:37.437000 audit[7354]: CRED_ACQ pid=7354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:37.738129 kernel: audit: type=1006 audit(1761958297.437:1554): pid=7354 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Nov 1 00:51:37.738262 kernel: audit: type=1300 audit(1761958297.437:1554): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8a91f510 a2=3 a3=0 items=0 ppid=1 pid=7354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:37.437000 audit[7354]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8a91f510 a2=3 a3=0 items=0 ppid=1 pid=7354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:51:37.437000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:37.860814 kernel: audit: type=1327 audit(1761958297.437:1554): proctitle=737368643A20636F7265205B707269765D Nov 1 00:51:37.860896 kernel: audit: type=1105 audit(1761958297.444:1555): pid=7354 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:37.444000 audit[7354]: USER_START pid=7354 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:37.444000 audit[7356]: CRED_ACQ pid=7356 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:37.970611 update_engine[1559]: I1101 00:51:37.970587 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:51:37.970950 update_engine[1559]: I1101 00:51:37.970703 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:51:37.970950 update_engine[1559]: E1101 00:51:37.970750 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 00:51:37.970950 update_engine[1559]: I1101 00:51:37.970789 1559 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 1 00:51:38.044643 kernel: audit: type=1103 audit(1761958297.444:1556): pid=7356 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:38.044717 kernel: audit: type=1106 audit(1761958297.520:1557): pid=7354 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:37.520000 audit[7354]: USER_END pid=7354 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:37.520000 audit[7354]: CRED_DISP pid=7354 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:38.229515 kernel: audit: type=1104 audit(1761958297.520:1558): pid=7354 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:51:37.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-145.40.82.49:22-147.75.109.163:33936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'