Nov 1 01:28:08.553730 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025 Nov 1 01:28:08.553743 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 01:28:08.553750 kernel: BIOS-provided physical RAM map: Nov 1 01:28:08.553754 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Nov 1 01:28:08.553757 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Nov 1 01:28:08.553761 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Nov 1 01:28:08.553766 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Nov 1 01:28:08.553770 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Nov 1 01:28:08.553774 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b2cfff] usable Nov 1 01:28:08.553777 kernel: BIOS-e820: [mem 0x0000000081b2d000-0x0000000081b2dfff] ACPI NVS Nov 1 01:28:08.553782 kernel: BIOS-e820: [mem 0x0000000081b2e000-0x0000000081b2efff] reserved Nov 1 01:28:08.553786 kernel: BIOS-e820: [mem 0x0000000081b2f000-0x000000008afccfff] usable Nov 1 01:28:08.553790 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Nov 1 01:28:08.553793 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Nov 1 01:28:08.553798 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Nov 1 01:28:08.553804 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Nov 1 01:28:08.553808 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Nov 1 01:28:08.553812 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Nov 1 01:28:08.553816 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 1 01:28:08.553820 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Nov 1 01:28:08.553824 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Nov 1 01:28:08.553829 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Nov 1 01:28:08.553833 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Nov 1 01:28:08.553837 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Nov 1 01:28:08.553841 kernel: NX (Execute Disable) protection: active Nov 1 01:28:08.553845 kernel: SMBIOS 3.2.1 present. 
Nov 1 01:28:08.553850 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Nov 1 01:28:08.553854 kernel: tsc: Detected 3400.000 MHz processor Nov 1 01:28:08.553859 kernel: tsc: Detected 3399.906 MHz TSC Nov 1 01:28:08.553863 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 01:28:08.553868 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 01:28:08.553872 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Nov 1 01:28:08.553876 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 01:28:08.553881 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Nov 1 01:28:08.553885 kernel: Using GB pages for direct mapping Nov 1 01:28:08.553889 kernel: ACPI: Early table checksum verification disabled Nov 1 01:28:08.553894 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Nov 1 01:28:08.553899 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Nov 1 01:28:08.553903 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Nov 1 01:28:08.553907 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Nov 1 01:28:08.553914 kernel: ACPI: FACS 0x000000008C66CF80 000040 Nov 1 01:28:08.553918 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Nov 1 01:28:08.553924 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Nov 1 01:28:08.553928 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Nov 1 01:28:08.553933 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Nov 1 01:28:08.553938 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Nov 1 01:28:08.553942 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Nov 1 01:28:08.553947 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Nov 1 01:28:08.553952 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Nov 1 01:28:08.553956 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:28:08.553962 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Nov 1 01:28:08.553966 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Nov 1 01:28:08.553971 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:28:08.553976 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:28:08.553980 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Nov 1 01:28:08.553985 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Nov 1 01:28:08.553989 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:28:08.553994 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Nov 1 01:28:08.554000 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Nov 1 01:28:08.554004 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Nov 1 01:28:08.554009 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Nov 1 01:28:08.554014 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Nov 1 01:28:08.554018 kernel: ACPI: SSDT 
0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Nov 1 01:28:08.554023 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Nov 1 01:28:08.554027 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Nov 1 01:28:08.554032 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Nov 1 01:28:08.554037 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Nov 1 01:28:08.554042 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Nov 1 01:28:08.554047 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Nov 1 01:28:08.554052 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Nov 1 01:28:08.554056 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Nov 1 01:28:08.554061 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Nov 1 01:28:08.554066 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Nov 1 01:28:08.554071 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Nov 1 01:28:08.554075 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Nov 1 01:28:08.554081 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Nov 1 01:28:08.554085 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Nov 1 01:28:08.554090 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Nov 1 01:28:08.554094 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Nov 1 01:28:08.554099 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Nov 1 01:28:08.554104 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Nov 1 01:28:08.554108 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Nov 1 01:28:08.554113 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Nov 1 01:28:08.554118 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Nov 1 01:28:08.554123 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Nov 1 01:28:08.554128 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Nov 1 01:28:08.554132 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Nov 1 01:28:08.554137 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Nov 1 01:28:08.554141 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Nov 1 01:28:08.554146 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Nov 1 01:28:08.554151 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Nov 1 01:28:08.554155 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Nov 1 01:28:08.554160 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Nov 1 01:28:08.554165 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Nov 1 01:28:08.554170 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Nov 1 01:28:08.554175 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Nov 1 01:28:08.554179 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Nov 1 01:28:08.554184 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Nov 1 01:28:08.554189 kernel: ACPI: Reserving HEST table memory at [mem 
0x8c599ff8-0x8c59a273] Nov 1 01:28:08.554193 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Nov 1 01:28:08.554198 kernel: No NUMA configuration found Nov 1 01:28:08.554203 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Nov 1 01:28:08.554208 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Nov 1 01:28:08.554213 kernel: Zone ranges: Nov 1 01:28:08.554218 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 01:28:08.554222 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 1 01:28:08.554227 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Nov 1 01:28:08.554232 kernel: Movable zone start for each node Nov 1 01:28:08.554236 kernel: Early memory node ranges Nov 1 01:28:08.554241 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Nov 1 01:28:08.554246 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Nov 1 01:28:08.554250 kernel: node 0: [mem 0x0000000040400000-0x0000000081b2cfff] Nov 1 01:28:08.554256 kernel: node 0: [mem 0x0000000081b2f000-0x000000008afccfff] Nov 1 01:28:08.554261 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Nov 1 01:28:08.554265 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Nov 1 01:28:08.554270 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Nov 1 01:28:08.554275 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Nov 1 01:28:08.554279 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 01:28:08.554287 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Nov 1 01:28:08.554293 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 1 01:28:08.554298 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Nov 1 01:28:08.554303 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Nov 1 01:28:08.554309 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Nov 1 01:28:08.554314 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Nov 1 01:28:08.554319 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Nov 1 01:28:08.554324 kernel: ACPI: PM-Timer IO Port: 0x1808 Nov 1 01:28:08.554329 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 1 01:28:08.554334 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 1 01:28:08.554339 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 1 01:28:08.554345 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 1 01:28:08.554350 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 1 01:28:08.554355 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 1 01:28:08.554359 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 1 01:28:08.554364 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 1 01:28:08.554369 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 1 01:28:08.554374 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 1 01:28:08.554379 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Nov 1 01:28:08.554384 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Nov 1 01:28:08.554390 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 1 01:28:08.554397 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 1 01:28:08.554426 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 1 01:28:08.554431 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 1 01:28:08.554451 kernel: IOAPIC[0]: apic_id 2, version 32, address 
0xfec00000, GSI 0-119 Nov 1 01:28:08.554470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 01:28:08.554475 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 01:28:08.554481 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 01:28:08.554486 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 01:28:08.554491 kernel: TSC deadline timer available Nov 1 01:28:08.554496 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Nov 1 01:28:08.554501 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Nov 1 01:28:08.554506 kernel: Booting paravirtualized kernel on bare hardware Nov 1 01:28:08.554511 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 01:28:08.554517 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Nov 1 01:28:08.554522 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Nov 1 01:28:08.554527 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Nov 1 01:28:08.554532 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 1 01:28:08.554537 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Nov 1 01:28:08.554542 kernel: Policy zone: Normal Nov 1 01:28:08.554548 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 01:28:08.554553 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 1 01:28:08.554558 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Nov 1 01:28:08.554563 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Nov 1 01:28:08.554568 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 01:28:08.554573 kernel: Memory: 32722604K/33452980K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 730116K reserved, 0K cma-reserved) Nov 1 01:28:08.554579 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 1 01:28:08.554584 kernel: ftrace: allocating 34614 entries in 136 pages Nov 1 01:28:08.554589 kernel: ftrace: allocated 136 pages with 2 groups Nov 1 01:28:08.554594 kernel: rcu: Hierarchical RCU implementation. Nov 1 01:28:08.554600 kernel: rcu: RCU event tracing is enabled. Nov 1 01:28:08.554605 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 1 01:28:08.554610 kernel: Rude variant of Tasks RCU enabled. Nov 1 01:28:08.554615 kernel: Tracing variant of Tasks RCU enabled. Nov 1 01:28:08.554621 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 1 01:28:08.554626 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 1 01:28:08.554631 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Nov 1 01:28:08.554636 kernel: random: crng init done Nov 1 01:28:08.554641 kernel: Console: colour dummy device 80x25 Nov 1 01:28:08.554646 kernel: printk: console [tty0] enabled Nov 1 01:28:08.554651 kernel: printk: console [ttyS1] enabled Nov 1 01:28:08.554656 kernel: ACPI: Core revision 20210730 Nov 1 01:28:08.554661 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Nov 1 01:28:08.554666 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 01:28:08.554672 kernel: DMAR: Host address width 39 Nov 1 01:28:08.554677 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Nov 1 01:28:08.554682 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Nov 1 01:28:08.554687 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Nov 1 01:28:08.554692 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Nov 1 01:28:08.554697 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Nov 1 01:28:08.554702 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Nov 1 01:28:08.554707 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Nov 1 01:28:08.554712 kernel: x2apic enabled Nov 1 01:28:08.554718 kernel: Switched APIC routing to cluster x2apic. Nov 1 01:28:08.554723 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Nov 1 01:28:08.554728 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Nov 1 01:28:08.554733 kernel: CPU0: Thermal monitoring enabled (TM1) Nov 1 01:28:08.554738 kernel: process: using mwait in idle threads Nov 1 01:28:08.554743 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 1 01:28:08.554748 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 1 01:28:08.554753 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 01:28:08.554758 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Nov 1 01:28:08.554763 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 1 01:28:08.554768 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 1 01:28:08.554773 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 1 01:28:08.554778 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 1 01:28:08.554783 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 1 01:28:08.554788 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 01:28:08.554793 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Nov 1 01:28:08.554798 kernel: TAA: Mitigation: TSX disabled Nov 1 01:28:08.554803 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Nov 1 01:28:08.554808 kernel: SRBDS: Mitigation: Microcode Nov 1 01:28:08.554813 kernel: GDS: Vulnerable: No microcode Nov 1 01:28:08.554819 kernel: active return thunk: its_return_thunk Nov 1 01:28:08.554824 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 01:28:08.554829 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 01:28:08.554834 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 01:28:08.554839 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 01:28:08.554844 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 1 01:28:08.554848 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Nov 1 01:28:08.554853 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 01:28:08.554858 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 1 01:28:08.554863 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 1 01:28:08.554868 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Nov 1 01:28:08.554873 kernel: Freeing SMP alternatives memory: 32K Nov 1 01:28:08.554879 kernel: pid_max: default: 32768 minimum: 301 Nov 1 01:28:08.554884 kernel: LSM: Security Framework initializing Nov 1 01:28:08.554888 kernel: SELinux: Initializing. Nov 1 01:28:08.554893 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 01:28:08.554898 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 01:28:08.554903 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Nov 1 01:28:08.554908 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 1 01:28:08.554913 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Nov 1 01:28:08.554918 kernel: ... version: 4 Nov 1 01:28:08.554923 kernel: ... bit width: 48 Nov 1 01:28:08.554929 kernel: ... generic registers: 4 Nov 1 01:28:08.554934 kernel: ... value mask: 0000ffffffffffff Nov 1 01:28:08.554939 kernel: ... max period: 00007fffffffffff Nov 1 01:28:08.554944 kernel: ... fixed-purpose events: 3 Nov 1 01:28:08.554949 kernel: ... event mask: 000000070000000f Nov 1 01:28:08.554954 kernel: signal: max sigframe size: 2032 Nov 1 01:28:08.554959 kernel: rcu: Hierarchical SRCU implementation. Nov 1 01:28:08.554964 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Nov 1 01:28:08.554969 kernel: smp: Bringing up secondary CPUs ... Nov 1 01:28:08.554974 kernel: x86: Booting SMP configuration: Nov 1 01:28:08.554980 kernel: .... 
node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Nov 1 01:28:08.554985 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 1 01:28:08.554990 kernel: #9 #10 #11 #12 #13 #14 #15 Nov 1 01:28:08.554995 kernel: smp: Brought up 1 node, 16 CPUs Nov 1 01:28:08.555000 kernel: smpboot: Max logical packages: 1 Nov 1 01:28:08.555005 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Nov 1 01:28:08.555010 kernel: devtmpfs: initialized Nov 1 01:28:08.555015 kernel: x86/mm: Memory block size: 128MB Nov 1 01:28:08.555021 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b2d000-0x81b2dfff] (4096 bytes) Nov 1 01:28:08.555026 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Nov 1 01:28:08.555031 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 01:28:08.555036 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 1 01:28:08.555041 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 01:28:08.555046 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 01:28:08.555051 kernel: audit: initializing netlink subsys (disabled) Nov 1 01:28:08.555056 kernel: audit: type=2000 audit(1761960483.041:1): state=initialized audit_enabled=0 res=1 Nov 1 01:28:08.555061 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 01:28:08.555067 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 01:28:08.555072 kernel: cpuidle: using governor menu Nov 1 01:28:08.555077 kernel: ACPI: bus type PCI registered Nov 1 01:28:08.555082 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 01:28:08.555087 kernel: dca service started, version 1.12.1 Nov 1 01:28:08.555092 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 1 01:28:08.555097 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Nov 1 01:28:08.555102 kernel: PCI: Using configuration type 1 for base access Nov 1 01:28:08.555107 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Nov 1 01:28:08.555112 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 1 01:28:08.555117 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 01:28:08.555122 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 01:28:08.555127 kernel: ACPI: Added _OSI(Module Device) Nov 1 01:28:08.555132 kernel: ACPI: Added _OSI(Processor Device) Nov 1 01:28:08.555137 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 01:28:08.555142 kernel: ACPI: Added _OSI(Linux-Dell-Video) Nov 1 01:28:08.555147 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Nov 1 01:28:08.555152 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Nov 1 01:28:08.555158 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Nov 1 01:28:08.555163 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:28:08.555168 kernel: ACPI: SSDT 0xFFFF96748021BA00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Nov 1 01:28:08.555173 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Nov 1 01:28:08.555178 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:28:08.555183 kernel: ACPI: SSDT 0xFFFF967481AE3400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Nov 1 01:28:08.555188 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:28:08.555193 kernel: ACPI: SSDT 0xFFFF967481A5B800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Nov 1 01:28:08.555198 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:28:08.555204 kernel: ACPI: SSDT 0xFFFF967481B4F800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Nov 1 01:28:08.555209 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:28:08.555213 kernel: ACPI: SSDT 0xFFFF96748014D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Nov 1 01:28:08.555218 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:28:08.555223 kernel: ACPI: SSDT 0xFFFF967481AE7000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Nov 1 01:28:08.555228 kernel: ACPI: Interpreter enabled Nov 1 01:28:08.555233 kernel: ACPI: PM: (supports S0 S5) Nov 1 01:28:08.555238 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 01:28:08.555243 kernel: HEST: Enabling Firmware First mode for corrected errors. Nov 1 01:28:08.555248 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Nov 1 01:28:08.555254 kernel: HEST: Table parsing has been initialized. Nov 1 01:28:08.555259 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Nov 1 01:28:08.555264 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 01:28:08.555269 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Nov 1 01:28:08.555274 kernel: ACPI: PM: Power Resource [USBC] Nov 1 01:28:08.555279 kernel: ACPI: PM: Power Resource [V0PR] Nov 1 01:28:08.555284 kernel: ACPI: PM: Power Resource [V1PR] Nov 1 01:28:08.555289 kernel: ACPI: PM: Power Resource [V2PR] Nov 1 01:28:08.555294 kernel: ACPI: PM: Power Resource [WRST] Nov 1 01:28:08.555300 kernel: ACPI: PM: Power Resource [FN00] Nov 1 01:28:08.555305 kernel: ACPI: PM: Power Resource [FN01] Nov 1 01:28:08.555310 kernel: ACPI: PM: Power Resource [FN02] Nov 1 01:28:08.555315 kernel: ACPI: PM: Power Resource [FN03] Nov 1 01:28:08.555320 kernel: ACPI: PM: Power Resource [FN04] Nov 1 01:28:08.555324 kernel: ACPI: PM: Power Resource [PIN] Nov 1 01:28:08.555329 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Nov 1 01:28:08.555401 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 01:28:08.555493 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Nov 1 01:28:08.555537 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Nov 1 01:28:08.555545 kernel: PCI host bridge to bus 0000:00 Nov 1 01:28:08.555591 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 01:28:08.555632 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 01:28:08.555671 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 01:28:08.555711 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Nov 1 01:28:08.555751 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Nov 1 01:28:08.555790 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Nov 1 01:28:08.555843 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Nov 1 01:28:08.555894 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Nov 1 01:28:08.555941 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Nov 1 01:28:08.555992 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Nov 1 01:28:08.556040 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Nov 1 01:28:08.556089 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Nov 1 01:28:08.556133 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Nov 1 01:28:08.556183 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Nov 1 01:28:08.556227 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Nov 1 01:28:08.556273 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Nov 1 01:28:08.556323 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Nov 1 01:28:08.556368 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Nov 1 01:28:08.556438 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Nov 1 01:28:08.556502 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Nov 1 01:28:08.556548 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:28:08.556596 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Nov 1 01:28:08.556644 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:28:08.556693 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Nov 1 01:28:08.556741 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Nov 1 01:28:08.556784 
kernel: pci 0000:00:16.0: PME# supported from D3hot Nov 1 01:28:08.556832 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Nov 1 01:28:08.556876 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Nov 1 01:28:08.556920 kernel: pci 0000:00:16.1: PME# supported from D3hot Nov 1 01:28:08.556970 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Nov 1 01:28:08.557014 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Nov 1 01:28:08.557058 kernel: pci 0000:00:16.4: PME# supported from D3hot Nov 1 01:28:08.557105 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Nov 1 01:28:08.557149 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Nov 1 01:28:08.557195 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Nov 1 01:28:08.557244 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Nov 1 01:28:08.557291 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Nov 1 01:28:08.557334 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Nov 1 01:28:08.557379 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Nov 1 01:28:08.557464 kernel: pci 0000:00:17.0: PME# supported from D3hot Nov 1 01:28:08.557528 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Nov 1 01:28:08.557573 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Nov 1 01:28:08.557622 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Nov 1 01:28:08.557669 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Nov 1 01:28:08.557718 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Nov 1 01:28:08.557764 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Nov 1 01:28:08.557814 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Nov 1 01:28:08.557860 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Nov 1 01:28:08.557910 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Nov 1 01:28:08.557956 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Nov 1 01:28:08.558005 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Nov 1 01:28:08.558050 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:28:08.558101 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Nov 1 01:28:08.558149 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Nov 1 01:28:08.558195 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Nov 1 01:28:08.558239 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Nov 1 01:28:08.558289 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Nov 1 01:28:08.558333 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Nov 1 01:28:08.558386 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Nov 1 01:28:08.558438 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Nov 1 01:28:08.558484 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Nov 1 01:28:08.558531 kernel: pci 0000:01:00.0: PME# supported from D3cold Nov 1 01:28:08.558577 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 01:28:08.558623 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 01:28:08.558674 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Nov 1 01:28:08.558723 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Nov 1 01:28:08.558770 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff 
pref] Nov 1 01:28:08.558816 kernel: pci 0000:01:00.1: PME# supported from D3cold Nov 1 01:28:08.558862 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 01:28:08.558909 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 01:28:08.558954 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:28:08.558999 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:28:08.559043 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:28:08.559088 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:28:08.559139 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Nov 1 01:28:08.559188 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:28:08.559284 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 1 01:28:08.559330 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Nov 1 01:28:08.559376 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 1 01:28:08.559425 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:28:08.559474 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:28:08.559519 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:28:08.559564 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:28:08.559614 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 1 01:28:08.559662 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:28:08.559708 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 1 01:28:08.559754 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Nov 1 01:28:08.559802 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 1 01:28:08.559847 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:28:08.559893 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:28:08.559938 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:28:08.559983 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:28:08.560028 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:28:08.560080 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Nov 1 01:28:08.560126 kernel: pci 0000:06:00.0: enabling Extended Tags Nov 1 01:28:08.560175 kernel: pci 0000:06:00.0: supports D1 D2 Nov 1 01:28:08.560221 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:28:08.560265 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:28:08.560310 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 01:28:08.560354 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:28:08.560406 kernel: pci_bus 0000:07: extended config space not accessible Nov 1 01:28:08.560462 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Nov 1 01:28:08.560515 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 1 01:28:08.560563 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 1 01:28:08.560611 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Nov 1 01:28:08.560660 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 01:28:08.560709 kernel: pci 0000:07:00.0: supports D1 D2 Nov 1 01:28:08.560757 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:28:08.560805 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:28:08.560851 kernel: pci 0000:06:00.0: 
bridge window [io 0x3000-0x3fff] Nov 1 01:28:08.560899 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:28:08.560907 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 1 01:28:08.560913 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 1 01:28:08.560918 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 1 01:28:08.560924 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 1 01:28:08.560929 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 1 01:28:08.560934 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 1 01:28:08.560940 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 1 01:28:08.560946 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 1 01:28:08.560952 kernel: iommu: Default domain type: Translated Nov 1 01:28:08.560957 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 01:28:08.561003 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Nov 1 01:28:08.561053 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 01:28:08.561101 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Nov 1 01:28:08.561108 kernel: vgaarb: loaded Nov 1 01:28:08.561114 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 01:28:08.561121 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 01:28:08.561127 kernel: PTP clock support registered Nov 1 01:28:08.561132 kernel: PCI: Using ACPI for IRQ routing Nov 1 01:28:08.561138 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 01:28:08.561143 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Nov 1 01:28:08.561148 kernel: e820: reserve RAM buffer [mem 0x81b2d000-0x83ffffff] Nov 1 01:28:08.561153 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Nov 1 01:28:08.561158 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Nov 1 01:28:08.561164 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 1 01:28:08.561169 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 1 01:28:08.561175 kernel: clocksource: Switched to clocksource tsc-early Nov 1 01:28:08.561180 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 01:28:08.561185 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 01:28:08.561191 kernel: pnp: PnP ACPI init Nov 1 01:28:08.561237 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 1 01:28:08.561281 kernel: pnp 00:02: [dma 0 disabled] Nov 1 01:28:08.561326 kernel: pnp 00:03: [dma 0 disabled] Nov 1 01:28:08.561372 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 1 01:28:08.561433 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 1 01:28:08.561479 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Nov 1 01:28:08.561522 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Nov 1 01:28:08.561563 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Nov 1 01:28:08.561601 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Nov 1 01:28:08.561643 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Nov 1 01:28:08.561682 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 1 01:28:08.561720 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 1 01:28:08.561759 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 1 01:28:08.561798 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could 
not be reserved Nov 1 01:28:08.561840 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Nov 1 01:28:08.561880 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 1 01:28:08.561920 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 1 01:28:08.561959 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 1 01:28:08.561998 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 1 01:28:08.562037 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 1 01:28:08.562076 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Nov 1 01:28:08.562119 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Nov 1 01:28:08.562126 kernel: pnp: PnP ACPI: found 10 devices Nov 1 01:28:08.562133 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 01:28:08.562139 kernel: NET: Registered PF_INET protocol family Nov 1 01:28:08.562144 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:28:08.562149 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 01:28:08.562155 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 01:28:08.562160 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:28:08.562165 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Nov 1 01:28:08.562170 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 1 01:28:08.562176 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:28:08.562182 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:28:08.562187 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 01:28:08.562192 kernel: NET: Registered PF_XDP protocol family Nov 1 01:28:08.562236 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 1 01:28:08.562281 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 1 01:28:08.562324 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 1 01:28:08.562370 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:28:08.562439 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:28:08.562489 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:28:08.562535 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:28:08.562580 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:28:08.562626 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:28:08.562671 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:28:08.562717 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:28:08.562763 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:28:08.562809 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:28:08.562853 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:28:08.562898 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:28:08.562942 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:28:08.562986 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:28:08.563031 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:28:08.563081 kernel: pci 
0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:28:08.563127 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 01:28:08.563173 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:28:08.563218 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:28:08.563263 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 01:28:08.563307 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:28:08.563348 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 1 01:28:08.563388 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 01:28:08.563431 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 01:28:08.563472 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 01:28:08.563512 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 1 01:28:08.563551 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 1 01:28:08.563597 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Nov 1 01:28:08.563640 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:28:08.563685 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Nov 1 01:28:08.563728 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Nov 1 01:28:08.563777 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 1 01:28:08.563819 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Nov 1 01:28:08.563863 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Nov 1 01:28:08.563905 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:28:08.563949 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 1 01:28:08.563993 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:28:08.564001 kernel: PCI: CLS 64 bytes, default 64 Nov 1 01:28:08.564007 kernel: DMAR: No ATSR found Nov 1 01:28:08.564013 kernel: DMAR: No SATC found Nov 1 01:28:08.564018 kernel: DMAR: dmar0: Using Queued invalidation Nov 1 01:28:08.564066 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 1 01:28:08.564112 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 1 01:28:08.564157 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 1 01:28:08.564201 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 1 01:28:08.564246 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 1 01:28:08.564292 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 1 01:28:08.564336 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 1 01:28:08.564380 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 1 01:28:08.564426 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 1 01:28:08.564472 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 1 01:28:08.564515 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 1 01:28:08.564560 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 1 01:28:08.564604 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 1 01:28:08.564651 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 1 01:28:08.564696 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 1 01:28:08.564740 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 1 01:28:08.564785 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Nov 1 01:28:08.564828 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 1 01:28:08.564873 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 1 01:28:08.564917 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 1 01:28:08.564961 
kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 1 01:28:08.565009 kernel: pci 0000:01:00.0: Adding to iommu group 1 Nov 1 01:28:08.565055 kernel: pci 0000:01:00.1: Adding to iommu group 1 Nov 1 01:28:08.565100 kernel: pci 0000:03:00.0: Adding to iommu group 15 Nov 1 01:28:08.565147 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 1 01:28:08.565193 kernel: pci 0000:06:00.0: Adding to iommu group 17 Nov 1 01:28:08.565241 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 1 01:28:08.565249 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 1 01:28:08.565255 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 01:28:08.565262 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Nov 1 01:28:08.565267 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 1 01:28:08.565273 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 1 01:28:08.565278 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 1 01:28:08.565283 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 1 01:28:08.565333 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 1 01:28:08.565341 kernel: Initialise system trusted keyrings Nov 1 01:28:08.565346 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 1 01:28:08.565353 kernel: Key type asymmetric registered Nov 1 01:28:08.565358 kernel: Asymmetric key parser 'x509' registered Nov 1 01:28:08.565364 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 1 01:28:08.565369 kernel: io scheduler mq-deadline registered Nov 1 01:28:08.565375 kernel: io scheduler kyber registered Nov 1 01:28:08.565380 kernel: io scheduler bfq registered Nov 1 01:28:08.565427 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 1 01:28:08.565473 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Nov 1 01:28:08.565518 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Nov 1 01:28:08.565565 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Nov 1 01:28:08.565609 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Nov 1 01:28:08.565673 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Nov 1 01:28:08.565722 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 1 01:28:08.565730 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 1 01:28:08.565736 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Nov 1 01:28:08.565741 kernel: pstore: Registered erst as persistent store backend Nov 1 01:28:08.565746 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 01:28:08.565753 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 01:28:08.565758 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 01:28:08.565763 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 1 01:28:08.565769 kernel: hpet_acpi_add: no address or irqs in _CRS Nov 1 01:28:08.565812 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Nov 1 01:28:08.565820 kernel: i8042: PNP: No PS/2 controller found. 
Nov 1 01:28:08.565861 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Nov 1 01:28:08.565901 kernel: rtc_cmos rtc_cmos: registered as rtc0 Nov 1 01:28:08.565945 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-01T01:28:07 UTC (1761960487) Nov 1 01:28:08.565985 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Nov 1 01:28:08.565992 kernel: intel_pstate: Intel P-state driver initializing Nov 1 01:28:08.565998 kernel: intel_pstate: Disabling energy efficiency optimization Nov 1 01:28:08.566003 kernel: intel_pstate: HWP enabled Nov 1 01:28:08.566008 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Nov 1 01:28:08.566013 kernel: vesafb: scrolling: redraw Nov 1 01:28:08.566019 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Nov 1 01:28:08.566025 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000007c151d27, using 768k, total 768k Nov 1 01:28:08.566030 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 01:28:08.566036 kernel: fb0: VESA VGA frame buffer device Nov 1 01:28:08.566041 kernel: NET: Registered PF_INET6 protocol family Nov 1 01:28:08.566046 kernel: Segment Routing with IPv6 Nov 1 01:28:08.566052 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 01:28:08.566057 kernel: NET: Registered PF_PACKET protocol family Nov 1 01:28:08.566062 kernel: Key type dns_resolver registered Nov 1 01:28:08.566067 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Nov 1 01:28:08.566073 kernel: microcode: Microcode Update Driver: v2.2. Nov 1 01:28:08.566078 kernel: IPI shorthand broadcast: enabled Nov 1 01:28:08.566084 kernel: sched_clock: Marking stable (1683550339, 1339989423)->(4477605462, -1454065700) Nov 1 01:28:08.566089 kernel: registered taskstats version 1 Nov 1 01:28:08.566094 kernel: Loading compiled-in X.509 certificates Nov 1 01:28:08.566099 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0' Nov 1 01:28:08.566104 kernel: Key type .fscrypt registered Nov 1 01:28:08.566110 kernel: Key type fscrypt-provisioning registered Nov 1 01:28:08.566115 kernel: pstore: Using crash dump compression: deflate Nov 1 01:28:08.566121 kernel: ima: Allocated hash algorithm: sha1 Nov 1 01:28:08.566126 kernel: ima: No architecture policies found Nov 1 01:28:08.566131 kernel: clk: Disabling unused clocks Nov 1 01:28:08.566137 kernel: Freeing unused kernel image (initmem) memory: 47496K Nov 1 01:28:08.566142 kernel: Write protecting the kernel read-only data: 28672k Nov 1 01:28:08.566147 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Nov 1 01:28:08.566152 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Nov 1 01:28:08.566158 kernel: Run /init as init process Nov 1 01:28:08.566163 kernel: with arguments: Nov 1 01:28:08.566169 kernel: /init Nov 1 01:28:08.566174 kernel: with environment: Nov 1 01:28:08.566179 kernel: HOME=/ Nov 1 01:28:08.566184 kernel: TERM=linux Nov 1 01:28:08.566189 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 1 01:28:08.566195 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 01:28:08.566202 systemd[1]: Detected architecture x86-64. Nov 1 01:28:08.566208 systemd[1]: Running in initrd. 
Nov 1 01:28:08.566214 systemd[1]: No hostname configured, using default hostname. Nov 1 01:28:08.566219 systemd[1]: Hostname set to . Nov 1 01:28:08.566224 systemd[1]: Initializing machine ID from random generator. Nov 1 01:28:08.566230 systemd[1]: Queued start job for default target initrd.target. Nov 1 01:28:08.566235 systemd[1]: Started systemd-ask-password-console.path. Nov 1 01:28:08.566241 systemd[1]: Reached target cryptsetup.target. Nov 1 01:28:08.566246 systemd[1]: Reached target paths.target. Nov 1 01:28:08.566251 systemd[1]: Reached target slices.target. Nov 1 01:28:08.566257 systemd[1]: Reached target swap.target. Nov 1 01:28:08.566263 systemd[1]: Reached target timers.target. Nov 1 01:28:08.566268 systemd[1]: Listening on iscsid.socket. Nov 1 01:28:08.566274 systemd[1]: Listening on iscsiuio.socket. Nov 1 01:28:08.566279 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 01:28:08.566284 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 01:28:08.566290 systemd[1]: Listening on systemd-journald.socket. Nov 1 01:28:08.566295 systemd[1]: Listening on systemd-networkd.socket. Nov 1 01:28:08.566301 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 01:28:08.566307 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Nov 1 01:28:08.566312 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 01:28:08.566318 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Nov 1 01:28:08.566323 kernel: clocksource: Switched to clocksource tsc Nov 1 01:28:08.566328 systemd[1]: Reached target sockets.target. Nov 1 01:28:08.566334 systemd[1]: Starting kmod-static-nodes.service... Nov 1 01:28:08.566339 systemd[1]: Finished network-cleanup.service. Nov 1 01:28:08.566344 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 01:28:08.566350 systemd[1]: Starting systemd-journald.service... Nov 1 01:28:08.566356 systemd[1]: Starting systemd-modules-load.service... Nov 1 01:28:08.566364 systemd-journald[267]: Journal started Nov 1 01:28:08.566388 systemd-journald[267]: Runtime Journal (/run/log/journal/8b05bb21d79c455f852890cc6d769f92) is 8.0M, max 640.1M, 632.1M free. Nov 1 01:28:08.568606 systemd-modules-load[268]: Inserted module 'overlay' Nov 1 01:28:08.572000 audit: BPF prog-id=6 op=LOAD Nov 1 01:28:08.591460 kernel: audit: type=1334 audit(1761960488.572:2): prog-id=6 op=LOAD Nov 1 01:28:08.591492 systemd[1]: Starting systemd-resolved.service... Nov 1 01:28:08.641442 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 01:28:08.641472 systemd[1]: Starting systemd-vconsole-setup.service... Nov 1 01:28:08.674436 kernel: Bridge firewalling registered Nov 1 01:28:08.674453 systemd[1]: Started systemd-journald.service. Nov 1 01:28:08.688481 systemd-modules-load[268]: Inserted module 'br_netfilter' Nov 1 01:28:08.737346 kernel: audit: type=1130 audit(1761960488.696:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:08.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:28:08.691231 systemd-resolved[270]: Positive Trust Anchors: Nov 1 01:28:08.801442 kernel: SCSI subsystem initialized Nov 1 01:28:08.801454 kernel: audit: type=1130 audit(1761960488.749:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:08.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:08.691238 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:28:08.917490 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 01:28:08.917502 kernel: audit: type=1130 audit(1761960488.821:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:08.917510 kernel: device-mapper: uevent: version 1.0.3 Nov 1 01:28:08.917517 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 01:28:08.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:08.691257 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 01:28:08.989688 kernel: audit: type=1130 audit(1761960488.924:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:08.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:08.692861 systemd-resolved[270]: Defaulting to hostname 'linux'. Nov 1 01:28:08.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:08.697639 systemd[1]: Started systemd-resolved.service. Nov 1 01:28:09.101585 kernel: audit: type=1130 audit(1761960488.996:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:09.101614 kernel: audit: type=1130 audit(1761960489.054:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:09.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:28:08.750580 systemd[1]: Finished kmod-static-nodes.service. Nov 1 01:28:08.822563 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 01:28:08.920384 systemd-modules-load[268]: Inserted module 'dm_multipath' Nov 1 01:28:08.925712 systemd[1]: Finished systemd-modules-load.service. Nov 1 01:28:08.998090 systemd[1]: Finished systemd-vconsole-setup.service. Nov 1 01:28:09.055657 systemd[1]: Reached target nss-lookup.target. Nov 1 01:28:09.110998 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 01:28:09.130928 systemd[1]: Starting systemd-sysctl.service... Nov 1 01:28:09.131236 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 01:28:09.134055 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 01:28:09.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:09.134627 systemd[1]: Finished systemd-sysctl.service. Nov 1 01:28:09.183400 kernel: audit: type=1130 audit(1761960489.132:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:09.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:09.196754 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 01:28:09.263489 kernel: audit: type=1130 audit(1761960489.195:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:09.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:09.254991 systemd[1]: Starting dracut-cmdline.service... Nov 1 01:28:09.278503 dracut-cmdline[294]: dracut-dracut-053 Nov 1 01:28:09.278503 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Nov 1 01:28:09.278503 dracut-cmdline[294]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 01:28:09.348484 kernel: Loading iSCSI transport class v2.0-870. Nov 1 01:28:09.348498 kernel: iscsi: registered transport (tcp) Nov 1 01:28:09.406186 kernel: iscsi: registered transport (qla4xxx) Nov 1 01:28:09.406203 kernel: QLogic iSCSI HBA Driver Nov 1 01:28:09.422417 systemd[1]: Finished dracut-cmdline.service. Nov 1 01:28:09.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:09.431118 systemd[1]: Starting dracut-pre-udev.service... 
Nov 1 01:28:09.488466 kernel: raid6: avx2x4 gen() 48789 MB/s Nov 1 01:28:09.523465 kernel: raid6: avx2x4 xor() 21645 MB/s Nov 1 01:28:09.558469 kernel: raid6: avx2x2 gen() 53591 MB/s Nov 1 01:28:09.593467 kernel: raid6: avx2x2 xor() 31993 MB/s Nov 1 01:28:09.628430 kernel: raid6: avx2x1 gen() 45061 MB/s Nov 1 01:28:09.663429 kernel: raid6: avx2x1 xor() 27857 MB/s Nov 1 01:28:09.698465 kernel: raid6: sse2x4 gen() 21307 MB/s Nov 1 01:28:09.733467 kernel: raid6: sse2x4 xor() 11966 MB/s Nov 1 01:28:09.766469 kernel: raid6: sse2x2 gen() 21654 MB/s Nov 1 01:28:09.800464 kernel: raid6: sse2x2 xor() 13373 MB/s Nov 1 01:28:09.834465 kernel: raid6: sse2x1 gen() 18229 MB/s Nov 1 01:28:09.886346 kernel: raid6: sse2x1 xor() 8886 MB/s Nov 1 01:28:09.886364 kernel: raid6: using algorithm avx2x2 gen() 53591 MB/s Nov 1 01:28:09.886372 kernel: raid6: .... xor() 31993 MB/s, rmw enabled Nov 1 01:28:09.904599 kernel: raid6: using avx2x2 recovery algorithm Nov 1 01:28:09.951442 kernel: xor: automatically using best checksumming function avx Nov 1 01:28:10.031431 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 01:28:10.036477 systemd[1]: Finished dracut-pre-udev.service. Nov 1 01:28:10.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:10.044000 audit: BPF prog-id=7 op=LOAD Nov 1 01:28:10.044000 audit: BPF prog-id=8 op=LOAD Nov 1 01:28:10.046303 systemd[1]: Starting systemd-udevd.service... Nov 1 01:28:10.054366 systemd-udevd[475]: Using default interface naming scheme 'v252'. Nov 1 01:28:10.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:10.059540 systemd[1]: Started systemd-udevd.service. Nov 1 01:28:10.102525 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation Nov 1 01:28:10.078024 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 01:28:10.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:10.105628 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 01:28:10.120780 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 01:28:10.174129 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 01:28:10.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:10.203433 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 01:28:10.205403 kernel: libata version 3.00 loaded. Nov 1 01:28:10.222404 kernel: ACPI: bus type USB registered Nov 1 01:28:10.222427 kernel: usbcore: registered new interface driver usbfs Nov 1 01:28:10.258426 kernel: usbcore: registered new interface driver hub Nov 1 01:28:10.275968 kernel: usbcore: registered new device driver usb Nov 1 01:28:10.317876 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 1 01:28:10.317920 kernel: AES CTR mode by8 optimization enabled Nov 1 01:28:10.318403 kernel: ahci 0000:00:17.0: version 3.0 Nov 1 01:28:10.791562 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Nov 1 01:28:10.791638 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Nov 1 01:28:11.237491 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Nov 1 01:28:11.237555 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 01:28:11.237611 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Nov 1 01:28:11.237619 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Nov 1 01:28:11.237626 kernel: scsi host0: ahci Nov 1 01:28:11.237690 kernel: scsi host1: ahci Nov 1 01:28:11.237745 kernel: scsi host2: ahci Nov 1 01:28:11.237800 kernel: igb 0000:03:00.0: added PHC on eth0 Nov 1 01:28:11.237855 kernel: scsi host3: ahci Nov 1 01:28:11.237913 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 01:28:11.237966 kernel: scsi host4: ahci Nov 1 01:28:11.238021 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:32:40 Nov 1 01:28:11.238073 kernel: scsi host5: ahci Nov 1 01:28:11.238127 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Nov 1 01:28:11.238180 kernel: scsi host6: ahci Nov 1 01:28:11.238234 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 1 01:28:11.238285 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Nov 1 01:28:11.238293 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 01:28:11.238344 kernel: igb 0000:04:00.0: added PHC on eth1 Nov 1 01:28:11.238399 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 01:28:11.238451 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:32:41 Nov 1 01:28:11.238502 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Nov 1 01:28:11.238555 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Nov 1 01:28:11.238607 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Nov 1 01:28:11.238614 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 1 01:28:11.238667 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Nov 1 01:28:11.238718 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Nov 1 01:28:11.238726 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 1 01:28:11.238775 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 1 01:28:11.238827 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Nov 1 01:28:11.238836 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 01:28:11.238884 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Nov 1 01:28:11.238892 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 1 01:28:11.238941 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Nov 1 01:28:11.238948 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 1 01:28:11.238997 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Nov 1 01:28:11.239004 kernel: hub 1-0:1.0: USB hub found Nov 1 01:28:11.239067 kernel: hub 1-0:1.0: 16 ports detected Nov 1 01:28:11.239124 kernel: hub 2-0:1.0: USB hub found Nov 1 01:28:11.239183 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Nov 1 01:28:11.239235 kernel: hub 2-0:1.0: 10 ports detected Nov 1 01:28:11.239290 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Nov 1 01:28:11.239341 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 01:28:11.239348 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 01:28:11.239355 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 1 01:28:11.239363 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 01:28:11.239370 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 01:28:11.239376 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 01:28:11.239382 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 01:28:11.239389 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Nov 1 01:28:11.239444 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Nov 1 01:28:11.239459 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 01:28:11.239466 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Nov 1 01:28:11.794304 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 01:28:11.794315 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 01:28:11.794382 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 01:28:11.794390 kernel: ata2.00: Features: NCQ-prio Nov 1 01:28:11.794400 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 01:28:11.794407 kernel: ata1.00: Features: NCQ-prio Nov 1 01:28:11.794414 kernel: hub 1-14:1.0: USB hub found Nov 1 01:28:11.794482 kernel: ata2.00: configured for UDMA/133 Nov 1 01:28:11.794491 kernel: hub 1-14:1.0: 4 ports detected Nov 1 01:28:11.794551 kernel: ata1.00: configured for UDMA/133 Nov 1 01:28:11.794558 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 01:28:11.936023 kernel: scsi 
1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 01:28:11.936192 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:28:11.936210 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:28:11.936227 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 01:28:11.936338 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 01:28:11.936443 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 1 01:28:11.936544 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 1 01:28:11.936662 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Nov 1 01:28:11.936769 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 1 01:28:11.936853 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 1 01:28:11.936942 kernel: sd 1:0:0:0: [sdb] Write Protect is off Nov 1 01:28:11.937007 kernel: port_module: 9 callbacks suppressed Nov 1 01:28:11.937017 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Nov 1 01:28:11.937072 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Nov 1 01:28:11.937129 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 01:28:11.937187 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 1 01:28:11.937244 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:28:11.937251 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 01:28:11.937307 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 01:28:11.937316 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:28:11.937322 kernel: GPT:9289727 != 937703087 Nov 1 01:28:11.937329 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 1 01:28:11.937468 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:28:11.937476 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 01:28:11.937483 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Nov 1 01:28:11.937541 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Nov 1 01:28:11.937599 kernel: GPT:9289727 != 937703087 Nov 1 01:28:11.937607 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 01:28:11.937614 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:28:11.937621 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:28:11.937627 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 1 01:28:11.937683 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 01:28:11.937691 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Nov 1 01:28:11.969358 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 01:28:12.022145 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (519) Nov 1 01:28:12.022165 kernel: usbcore: registered new interface driver usbhid Nov 1 01:28:12.022172 kernel: usbhid: USB HID core driver Nov 1 01:28:11.991390 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 01:28:12.073952 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Nov 1 01:28:12.074031 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 1 01:28:12.043047 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 01:28:12.083444 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Nov 1 01:28:12.105389 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 01:28:12.286444 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 1 01:28:12.286609 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:28:12.286619 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 1 01:28:12.286626 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:28:12.286633 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 1 01:28:12.286703 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:28:12.130996 systemd[1]: Starting disk-uuid.service... Nov 1 01:28:12.317473 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:28:12.317502 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:28:12.317696 disk-uuid[691]: Primary Header is updated. Nov 1 01:28:12.317696 disk-uuid[691]: Secondary Entries is updated. Nov 1 01:28:12.317696 disk-uuid[691]: Secondary Header is updated. Nov 1 01:28:12.357495 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:28:13.323411 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:28:13.343164 disk-uuid[692]: The operation has completed successfully. Nov 1 01:28:13.351520 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:28:13.380656 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 01:28:13.480341 kernel: audit: type=1130 audit(1761960493.387:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.480369 kernel: audit: type=1131 audit(1761960493.387:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.380703 systemd[1]: Finished disk-uuid.service. Nov 1 01:28:13.510498 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 01:28:13.394397 systemd[1]: Starting verity-setup.service... Nov 1 01:28:13.546511 systemd[1]: Found device dev-mapper-usr.device. Nov 1 01:28:13.555428 systemd[1]: Mounting sysusr-usr.mount... Nov 1 01:28:13.562652 systemd[1]: Finished verity-setup.service. Nov 1 01:28:13.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.629452 kernel: audit: type=1130 audit(1761960493.580:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.684892 systemd[1]: Mounted sysusr-usr.mount. Nov 1 01:28:13.699506 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Nov 1 01:28:13.691704 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 01:28:13.782296 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:28:13.782311 kernel: BTRFS info (device sda6): using free space tree Nov 1 01:28:13.782319 kernel: BTRFS info (device sda6): has skinny extents Nov 1 01:28:13.782326 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 01:28:13.692117 systemd[1]: Starting ignition-setup.service... Nov 1 01:28:13.714868 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 01:28:13.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.790809 systemd[1]: Finished ignition-setup.service. Nov 1 01:28:13.912967 kernel: audit: type=1130 audit(1761960493.806:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.912982 kernel: audit: type=1130 audit(1761960493.862:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.807755 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 01:28:13.944457 kernel: audit: type=1334 audit(1761960493.921:24): prog-id=9 op=LOAD Nov 1 01:28:13.921000 audit: BPF prog-id=9 op=LOAD Nov 1 01:28:13.864057 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 01:28:13.923440 systemd[1]: Starting systemd-networkd.service... Nov 1 01:28:13.961809 systemd-networkd[881]: lo: Link UP Nov 1 01:28:13.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.997525 ignition[869]: Ignition 2.14.0 Nov 1 01:28:14.040547 kernel: audit: type=1130 audit(1761960493.974:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.961811 systemd-networkd[881]: lo: Gained carrier Nov 1 01:28:13.997529 ignition[869]: Stage: fetch-offline Nov 1 01:28:14.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.962143 systemd-networkd[881]: Enumeration completed Nov 1 01:28:14.189505 kernel: audit: type=1130 audit(1761960494.054:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:14.189591 kernel: audit: type=1130 audit(1761960494.114:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:28:14.189600 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 01:28:14.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.997556 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 01:28:14.216470 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Nov 1 01:28:13.962211 systemd[1]: Started systemd-networkd.service. Nov 1 01:28:13.997570 ignition[869]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 01:28:13.962860 systemd-networkd[881]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:28:14.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:14.004548 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:28:14.274489 iscsid[902]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 01:28:14.274489 iscsid[902]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Nov 1 01:28:14.274489 iscsid[902]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Nov 1 01:28:14.274489 iscsid[902]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 01:28:14.274489 iscsid[902]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 01:28:14.274489 iscsid[902]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 01:28:14.274489 iscsid[902]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 01:28:14.430606 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 1 01:28:14.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:13.975499 systemd[1]: Reached target network.target. Nov 1 01:28:14.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:14.004618 ignition[869]: parsed url from cmdline: "" Nov 1 01:28:14.006285 unknown[869]: fetched base config from "system" Nov 1 01:28:14.004621 ignition[869]: no config URL provided Nov 1 01:28:14.006289 unknown[869]: fetched user config from "system" Nov 1 01:28:14.004623 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 01:28:14.034978 systemd[1]: Starting iscsiuio.service... Nov 1 01:28:14.004648 ignition[869]: parsing config with SHA512: 03a99dac013b4da9811b767b77c1ff6b4d15084ca6fef31f9e294246f0d4ff83d7f15543b638af4b943b138f9332c9b9b6eb8060b854c0eeb135945065897cdd Nov 1 01:28:14.047687 systemd[1]: Started iscsiuio.service. 
Nov 1 01:28:14.006594 ignition[869]: fetch-offline: fetch-offline passed Nov 1 01:28:14.055769 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 01:28:14.006597 ignition[869]: POST message to Packet Timeline Nov 1 01:28:14.115649 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 01:28:14.006601 ignition[869]: POST Status error: resource requires networking Nov 1 01:28:14.116109 systemd[1]: Starting ignition-kargs.service... Nov 1 01:28:14.006638 ignition[869]: Ignition finished successfully Nov 1 01:28:14.191907 systemd-networkd[881]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:28:14.194372 ignition[892]: Ignition 2.14.0 Nov 1 01:28:14.204035 systemd[1]: Starting iscsid.service... Nov 1 01:28:14.194376 ignition[892]: Stage: kargs Nov 1 01:28:14.230693 systemd[1]: Started iscsid.service. Nov 1 01:28:14.194487 ignition[892]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 01:28:14.244919 systemd[1]: Starting dracut-initqueue.service... Nov 1 01:28:14.194497 ignition[892]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 01:28:14.264607 systemd[1]: Finished dracut-initqueue.service. Nov 1 01:28:14.196588 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:28:14.283625 systemd[1]: Reached target remote-fs-pre.target. Nov 1 01:28:14.197139 ignition[892]: kargs: kargs passed Nov 1 01:28:14.327631 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 01:28:14.197143 ignition[892]: POST message to Packet Timeline Nov 1 01:28:14.357122 systemd[1]: Reached target remote-fs.target. Nov 1 01:28:14.197153 ignition[892]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:28:14.373125 systemd[1]: Starting dracut-pre-mount.service... Nov 1 01:28:14.200663 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59966->[::1]:53: read: connection refused Nov 1 01:28:14.404854 systemd-networkd[881]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:28:14.401019 ignition[892]: GET https://metadata.packet.net/metadata: attempt #2 Nov 1 01:28:14.419612 systemd[1]: Finished dracut-pre-mount.service. Nov 1 01:28:14.401282 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60549->[::1]:53: read: connection refused Nov 1 01:28:14.432497 systemd-networkd[881]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 1 01:28:14.461107 systemd-networkd[881]: enp1s0f1np1: Link UP Nov 1 01:28:14.461224 systemd-networkd[881]: enp1s0f1np1: Gained carrier Nov 1 01:28:14.471669 systemd-networkd[881]: enp1s0f0np0: Link UP Nov 1 01:28:14.471806 systemd-networkd[881]: eno2: Link UP Nov 1 01:28:14.471931 systemd-networkd[881]: eno1: Link UP Nov 1 01:28:14.802132 ignition[892]: GET https://metadata.packet.net/metadata: attempt #3 Nov 1 01:28:14.803094 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58991->[::1]:53: read: connection refused Nov 1 01:28:15.242192 systemd-networkd[881]: enp1s0f0np0: Gained carrier Nov 1 01:28:15.250629 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Nov 1 01:28:15.272651 systemd-networkd[881]: enp1s0f0np0: DHCPv4 address 139.178.94.15/31, gateway 139.178.94.14 acquired from 145.40.83.140 Nov 1 01:28:15.538886 systemd-networkd[881]: enp1s0f1np1: Gained IPv6LL Nov 1 01:28:15.603766 ignition[892]: GET https://metadata.packet.net/metadata: attempt #4 Nov 1 01:28:15.604904 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55229->[::1]:53: read: connection refused Nov 1 01:28:16.818887 systemd-networkd[881]: enp1s0f0np0: Gained IPv6LL Nov 1 01:28:17.206691 ignition[892]: GET https://metadata.packet.net/metadata: attempt #5 Nov 1 01:28:17.207784 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49334->[::1]:53: read: connection refused Nov 1 01:28:20.408363 ignition[892]: GET https://metadata.packet.net/metadata: attempt #6 Nov 1 01:28:21.477234 ignition[892]: GET result: OK Nov 1 01:28:21.905037 ignition[892]: Ignition finished successfully Nov 1 01:28:21.909843 systemd[1]: Finished ignition-kargs.service. Nov 1 01:28:21.996081 kernel: kauditd_printk_skb: 3 callbacks suppressed Nov 1 01:28:21.996122 kernel: audit: type=1130 audit(1761960501.919:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:21.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:21.929249 ignition[920]: Ignition 2.14.0 Nov 1 01:28:21.923392 systemd[1]: Starting ignition-disks.service... Nov 1 01:28:21.929253 ignition[920]: Stage: disks Nov 1 01:28:21.929330 ignition[920]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 01:28:21.929339 ignition[920]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 01:28:21.931803 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:28:21.932382 ignition[920]: disks: disks passed Nov 1 01:28:21.932385 ignition[920]: POST message to Packet Timeline Nov 1 01:28:21.932395 ignition[920]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:28:22.978966 ignition[920]: GET result: OK Nov 1 01:28:23.355992 ignition[920]: Ignition finished successfully Nov 1 01:28:23.358953 systemd[1]: Finished ignition-disks.service. 
Nov 1 01:28:23.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:23.373073 systemd[1]: Reached target initrd-root-device.target. Nov 1 01:28:23.452674 kernel: audit: type=1130 audit(1761960503.371:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:23.437654 systemd[1]: Reached target local-fs-pre.target. Nov 1 01:28:23.437700 systemd[1]: Reached target local-fs.target. Nov 1 01:28:23.460658 systemd[1]: Reached target sysinit.target. Nov 1 01:28:23.475709 systemd[1]: Reached target basic.target. Nov 1 01:28:23.491986 systemd[1]: Starting systemd-fsck-root.service... Nov 1 01:28:23.524372 systemd-fsck[936]: ROOT: clean, 637/553520 files, 56032/553472 blocks Nov 1 01:28:23.534763 systemd[1]: Finished systemd-fsck-root.service. Nov 1 01:28:23.625731 kernel: audit: type=1130 audit(1761960503.543:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:23.625755 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 01:28:23.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:23.547756 systemd[1]: Mounting sysroot.mount... Nov 1 01:28:23.634840 systemd[1]: Mounted sysroot.mount. Nov 1 01:28:23.648762 systemd[1]: Reached target initrd-root-fs.target. Nov 1 01:28:23.665212 systemd[1]: Mounting sysroot-usr.mount... Nov 1 01:28:23.681744 systemd[1]: Starting flatcar-metadata-hostname.service... Nov 1 01:28:23.695322 systemd[1]: Starting flatcar-static-network.service... Nov 1 01:28:23.710634 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 01:28:23.710726 systemd[1]: Reached target ignition-diskful.target. Nov 1 01:28:23.729608 systemd[1]: Mounted sysroot-usr.mount. Nov 1 01:28:23.752938 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 01:28:23.765332 systemd[1]: Starting initrd-setup-root.service... Nov 1 01:28:23.898355 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (945) Nov 1 01:28:23.898376 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:28:23.898386 kernel: BTRFS info (device sda6): using free space tree Nov 1 01:28:23.898393 kernel: BTRFS info (device sda6): has skinny extents Nov 1 01:28:23.898406 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 01:28:23.834300 systemd[1]: Finished initrd-setup-root.service. Nov 1 01:28:23.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:23.960535 coreos-metadata[944]: Nov 01 01:28:23.845 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:28:23.981655 kernel: audit: type=1130 audit(1761960503.906:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:28:23.981669 coreos-metadata[943]: Nov 01 01:28:23.845 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:28:24.001652 initrd-setup-root[952]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 01:28:23.909718 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 01:28:24.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:24.050595 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory Nov 1 01:28:24.081651 kernel: audit: type=1130 audit(1761960504.016:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:23.969014 systemd[1]: Starting ignition-mount.service... Nov 1 01:28:24.088677 initrd-setup-root[970]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 01:28:23.988988 systemd[1]: Starting sysroot-boot.service... Nov 1 01:28:24.105684 initrd-setup-root[978]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 01:28:24.008824 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Nov 1 01:28:24.125567 ignition[1020]: INFO : Ignition 2.14.0 Nov 1 01:28:24.125567 ignition[1020]: INFO : Stage: mount Nov 1 01:28:24.125567 ignition[1020]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 01:28:24.125567 ignition[1020]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 01:28:24.125567 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:28:24.125567 ignition[1020]: INFO : mount: mount passed Nov 1 01:28:24.125567 ignition[1020]: INFO : POST message to Packet Timeline Nov 1 01:28:24.125567 ignition[1020]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:28:24.008871 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Nov 1 01:28:24.011318 systemd[1]: Finished sysroot-boot.service. Nov 1 01:28:24.965835 coreos-metadata[944]: Nov 01 01:28:24.965 INFO Fetch successful Nov 1 01:28:25.046483 systemd[1]: flatcar-static-network.service: Deactivated successfully. Nov 1 01:28:25.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:25.046544 systemd[1]: Finished flatcar-static-network.service. Nov 1 01:28:25.178644 kernel: audit: type=1130 audit(1761960505.054:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:25.178659 kernel: audit: type=1131 audit(1761960505.054:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:25.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:28:25.178687 ignition[1020]: INFO : GET result: OK Nov 1 01:28:25.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:25.215518 coreos-metadata[943]: Nov 01 01:28:25.090 INFO Fetch successful Nov 1 01:28:25.215518 coreos-metadata[943]: Nov 01 01:28:25.125 INFO wrote hostname ci-3510.3.8-n-34cd8b9336 to /sysroot/etc/hostname Nov 1 01:28:25.266612 kernel: audit: type=1130 audit(1761960505.186:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:25.125890 systemd[1]: Finished flatcar-metadata-hostname.service. Nov 1 01:28:25.494419 ignition[1020]: INFO : Ignition finished successfully Nov 1 01:28:25.495419 systemd[1]: Finished ignition-mount.service. Nov 1 01:28:25.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:25.513105 systemd[1]: Starting ignition-files.service... Nov 1 01:28:25.581499 kernel: audit: type=1130 audit(1761960505.510:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:25.576253 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 01:28:25.628520 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1032) Nov 1 01:28:25.628530 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:28:25.663109 kernel: BTRFS info (device sda6): using free space tree Nov 1 01:28:25.663126 kernel: BTRFS info (device sda6): has skinny extents Nov 1 01:28:25.711437 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 01:28:25.713402 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Nov 1 01:28:25.730529 ignition[1051]: INFO : Ignition 2.14.0 Nov 1 01:28:25.730529 ignition[1051]: INFO : Stage: files Nov 1 01:28:25.730529 ignition[1051]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 01:28:25.730529 ignition[1051]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 01:28:25.730529 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:28:25.730529 ignition[1051]: DEBUG : files: compiled without relabeling support, skipping Nov 1 01:28:25.730529 ignition[1051]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 01:28:25.730529 ignition[1051]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 01:28:25.732241 unknown[1051]: wrote ssh authorized keys file for user: core Nov 1 01:28:25.833488 ignition[1051]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 01:28:25.833488 ignition[1051]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 01:28:25.833488 ignition[1051]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 01:28:25.833488 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 01:28:25.833488 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 1 01:28:25.833488 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Nov 1 01:28:25.911505 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Nov 1 01:28:25.895455 systemd[1]: mnt-oem890309440.mount: Deactivated successfully. Nov 1 01:28:26.173644 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem890309440" Nov 1 01:28:26.173644 ignition[1051]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem890309440": device or resource busy Nov 1 01:28:26.173644 ignition[1051]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem890309440", trying btrfs: device or resource busy Nov 1 01:28:26.173644 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem890309440" Nov 1 01:28:26.173644 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem890309440" Nov 1 01:28:26.173644 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem890309440" Nov 1 01:28:26.173644 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem890309440" Nov 1 01:28:26.173644 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Nov 1 01:28:26.173644 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 01:28:26.173644 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 1 01:28:26.486684 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Nov 1 01:28:27.539261 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 01:28:27.539261 ignition[1051]: INFO : files: op(f): [started] processing unit "coreos-metadata-sshkeys@.service" Nov 1 01:28:27.539261 ignition[1051]: INFO : files: op(f): [finished] processing unit "coreos-metadata-sshkeys@.service" Nov 1 01:28:27.539261 ignition[1051]: INFO : files: op(10): [started] processing unit "packet-phone-home.service" Nov 1 01:28:27.539261 ignition[1051]: INFO : files: op(10): [finished] processing unit "packet-phone-home.service" Nov 1 01:28:27.539261 ignition[1051]: INFO : files: op(11): [started] processing unit "prepare-helm.service" Nov 1 01:28:27.620728 ignition[1051]: INFO : files: op(11): op(12): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 
01:28:27.620728 ignition[1051]: INFO : files: op(11): op(12): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 01:28:27.620728 ignition[1051]: INFO : files: op(11): [finished] processing unit "prepare-helm.service" Nov 1 01:28:27.620728 ignition[1051]: INFO : files: op(13): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 01:28:27.620728 ignition[1051]: INFO : files: op(13): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 01:28:27.620728 ignition[1051]: INFO : files: op(14): [started] setting preset to enabled for "packet-phone-home.service" Nov 1 01:28:27.620728 ignition[1051]: INFO : files: op(14): [finished] setting preset to enabled for "packet-phone-home.service" Nov 1 01:28:27.620728 ignition[1051]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Nov 1 01:28:27.620728 ignition[1051]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 01:28:27.620728 ignition[1051]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:28:27.620728 ignition[1051]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:28:27.620728 ignition[1051]: INFO : files: files passed Nov 1 01:28:27.620728 ignition[1051]: INFO : POST message to Packet Timeline Nov 1 01:28:27.620728 ignition[1051]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:28:28.672765 ignition[1051]: INFO : GET result: OK Nov 1 01:28:29.168798 ignition[1051]: INFO : Ignition finished successfully Nov 1 01:28:29.172036 systemd[1]: Finished ignition-files.service. Nov 1 01:28:29.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.191285 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 01:28:29.263645 kernel: audit: type=1130 audit(1761960509.184:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.252645 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 01:28:29.288661 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:28:29.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.252977 systemd[1]: Starting ignition-quench.service... Nov 1 01:28:29.478776 kernel: audit: type=1130 audit(1761960509.297:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.478794 kernel: audit: type=1130 audit(1761960509.364:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:28:29.478802 kernel: audit: type=1131 audit(1761960509.364:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.270768 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 01:28:29.298806 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 01:28:29.298885 systemd[1]: Finished ignition-quench.service. Nov 1 01:28:29.634967 kernel: audit: type=1130 audit(1761960509.519:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.634978 kernel: audit: type=1131 audit(1761960509.519:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.365689 systemd[1]: Reached target ignition-complete.target. Nov 1 01:28:29.488012 systemd[1]: Starting initrd-parse-etc.service... Nov 1 01:28:29.502752 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 01:28:29.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.502795 systemd[1]: Finished initrd-parse-etc.service. Nov 1 01:28:29.756629 kernel: audit: type=1130 audit(1761960509.683:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.520696 systemd[1]: Reached target initrd-fs.target. Nov 1 01:28:29.643623 systemd[1]: Reached target initrd.target. Nov 1 01:28:29.643683 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 01:28:29.644038 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 01:28:29.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.665766 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 01:28:29.893627 kernel: audit: type=1131 audit(1761960509.816:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.685271 systemd[1]: Starting initrd-cleanup.service... 
Nov 1 01:28:29.751422 systemd[1]: Stopped target nss-lookup.target. Nov 1 01:28:29.765669 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 01:28:29.782728 systemd[1]: Stopped target timers.target. Nov 1 01:28:29.797709 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 01:28:29.797819 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 01:28:29.817909 systemd[1]: Stopped target initrd.target. Nov 1 01:28:29.886665 systemd[1]: Stopped target basic.target. Nov 1 01:28:29.893723 systemd[1]: Stopped target ignition-complete.target. Nov 1 01:28:29.915697 systemd[1]: Stopped target ignition-diskful.target. Nov 1 01:28:29.923715 systemd[1]: Stopped target initrd-root-device.target. Nov 1 01:28:30.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.945724 systemd[1]: Stopped target remote-fs.target. Nov 1 01:28:30.146626 kernel: audit: type=1131 audit(1761960510.058:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.960743 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 01:28:30.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.976943 systemd[1]: Stopped target sysinit.target. Nov 1 01:28:30.224652 kernel: audit: type=1131 audit(1761960510.145:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:30.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:29.993041 systemd[1]: Stopped target local-fs.target. Nov 1 01:28:30.010022 systemd[1]: Stopped target local-fs-pre.target. Nov 1 01:28:30.027017 systemd[1]: Stopped target swap.target. Nov 1 01:28:30.042015 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 01:28:30.042391 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 01:28:30.060253 systemd[1]: Stopped target cryptsetup.target. Nov 1 01:28:30.138640 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 01:28:30.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:30.138709 systemd[1]: Stopped dracut-initqueue.service. Nov 1 01:28:30.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:30.146750 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 01:28:30.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:30.146810 systemd[1]: Stopped ignition-fetch-offline.service. 
Nov 1 01:28:30.372626 ignition[1100]: INFO : Ignition 2.14.0 Nov 1 01:28:30.372626 ignition[1100]: INFO : Stage: umount Nov 1 01:28:30.372626 ignition[1100]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 01:28:30.372626 ignition[1100]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 01:28:30.372626 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:28:30.372626 ignition[1100]: INFO : umount: umount passed Nov 1 01:28:30.372626 ignition[1100]: INFO : POST message to Packet Timeline Nov 1 01:28:30.372626 ignition[1100]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:28:30.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:30.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:30.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:30.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:30.216787 systemd[1]: Stopped target paths.target. Nov 1 01:28:30.231673 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 01:28:30.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:30.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:30.235674 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 01:28:30.253731 systemd[1]: Stopped target slices.target. Nov 1 01:28:30.268664 systemd[1]: Stopped target sockets.target. Nov 1 01:28:30.285755 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 01:28:30.285858 systemd[1]: Closed iscsid.socket. Nov 1 01:28:30.299917 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 01:28:30.300166 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 01:28:30.317079 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 01:28:30.317443 systemd[1]: Stopped ignition-files.service. Nov 1 01:28:30.333129 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 01:28:30.333519 systemd[1]: Stopped flatcar-metadata-hostname.service. Nov 1 01:28:30.350289 systemd[1]: Stopping ignition-mount.service... Nov 1 01:28:30.364758 systemd[1]: Stopping iscsiuio.service... Nov 1 01:28:30.381662 systemd[1]: Stopping sysroot-boot.service... Nov 1 01:28:30.393660 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 01:28:30.394060 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 01:28:30.436808 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Nov 1 01:28:30.436936 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 01:28:30.461878 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 01:28:30.462225 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 01:28:30.462276 systemd[1]: Stopped iscsiuio.service. Nov 1 01:28:30.465873 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 01:28:30.465918 systemd[1]: Stopped sysroot-boot.service. Nov 1 01:28:30.481020 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 01:28:30.481090 systemd[1]: Closed iscsiuio.socket. Nov 1 01:28:30.503982 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 01:28:30.504084 systemd[1]: Finished initrd-cleanup.service. Nov 1 01:28:31.619193 ignition[1100]: INFO : GET result: OK Nov 1 01:28:32.737946 ignition[1100]: INFO : Ignition finished successfully Nov 1 01:28:32.740730 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 01:28:32.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.740980 systemd[1]: Stopped ignition-mount.service. Nov 1 01:28:32.754945 systemd[1]: Stopped target network.target. Nov 1 01:28:32.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.770704 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 01:28:32.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.770877 systemd[1]: Stopped ignition-disks.service. Nov 1 01:28:32.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.786846 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 01:28:32.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.787001 systemd[1]: Stopped ignition-kargs.service. Nov 1 01:28:32.802946 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 01:28:32.803101 systemd[1]: Stopped ignition-setup.service. Nov 1 01:28:32.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.819853 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 01:28:32.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.900000 audit: BPF prog-id=6 op=UNLOAD Nov 1 01:28:32.820009 systemd[1]: Stopped initrd-setup-root.service. Nov 1 01:28:32.836120 systemd[1]: Stopping systemd-networkd.service... 
Nov 1 01:28:32.845569 systemd-networkd[881]: enp1s0f1np1: DHCPv6 lease lost Nov 1 01:28:32.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.851938 systemd[1]: Stopping systemd-resolved.service... Nov 1 01:28:32.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.854573 systemd-networkd[881]: enp1s0f0np0: DHCPv6 lease lost Nov 1 01:28:32.970000 audit: BPF prog-id=9 op=UNLOAD Nov 1 01:28:32.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.866325 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 01:28:32.866611 systemd[1]: Stopped systemd-resolved.service. Nov 1 01:28:33.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.884467 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 01:28:32.884726 systemd[1]: Stopped systemd-networkd.service. Nov 1 01:28:32.900294 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 01:28:33.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.900395 systemd[1]: Closed systemd-networkd.socket. Nov 1 01:28:33.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.919432 systemd[1]: Stopping network-cleanup.service... Nov 1 01:28:33.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.931618 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 01:28:32.931775 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 01:28:33.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.947788 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 01:28:33.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.947940 systemd[1]: Stopped systemd-sysctl.service. Nov 1 01:28:33.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.964146 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Nov 1 01:28:33.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:33.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:32.964300 systemd[1]: Stopped systemd-modules-load.service. Nov 1 01:28:32.981053 systemd[1]: Stopping systemd-udevd.service... Nov 1 01:28:32.999631 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 01:28:33.000778 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 01:28:33.000845 systemd[1]: Stopped systemd-udevd.service. Nov 1 01:28:33.014918 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 01:28:33.014952 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 01:28:33.028613 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 01:28:33.028648 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 01:28:33.044533 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 01:28:33.044592 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 01:28:33.059727 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 01:28:33.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:33.059835 systemd[1]: Stopped dracut-cmdline.service. Nov 1 01:28:33.075569 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 01:28:33.075593 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 01:28:33.090904 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 01:28:33.105490 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 01:28:33.360954 iscsid[902]: iscsid shutting down. Nov 1 01:28:33.105541 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Nov 1 01:28:33.123699 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 01:28:33.123771 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 01:28:33.139704 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 01:28:33.139835 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 01:28:33.158345 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 1 01:28:33.159797 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 01:28:33.160014 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 01:28:33.271023 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 01:28:33.271265 systemd[1]: Stopped network-cleanup.service. Nov 1 01:28:33.283019 systemd[1]: Reached target initrd-switch-root.target. Nov 1 01:28:33.301544 systemd[1]: Starting initrd-switch-root.service... Nov 1 01:28:33.318326 systemd[1]: Switching root. Nov 1 01:28:33.361249 systemd-journald[267]: Journal stopped Nov 1 01:28:37.162480 systemd-journald[267]: Received SIGTERM from PID 1 (n/a). Nov 1 01:28:37.162494 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 01:28:37.162503 kernel: SELinux: Class anon_inode not defined in policy. 
Nov 1 01:28:37.162509 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 01:28:37.162514 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 01:28:37.162520 kernel: SELinux: policy capability open_perms=1 Nov 1 01:28:37.162526 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 01:28:37.162536 kernel: SELinux: policy capability always_check_network=0 Nov 1 01:28:37.162545 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 01:28:37.162553 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 01:28:37.162560 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 01:28:37.162566 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 01:28:37.162572 systemd[1]: Successfully loaded SELinux policy in 303.085ms. Nov 1 01:28:37.162579 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.435ms. Nov 1 01:28:37.162588 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 01:28:37.162594 systemd[1]: Detected architecture x86-64. Nov 1 01:28:37.162600 systemd[1]: Detected first boot. Nov 1 01:28:37.162606 systemd[1]: Hostname set to . Nov 1 01:28:37.162613 systemd[1]: Initializing machine ID from random generator. Nov 1 01:28:37.162619 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 01:28:37.162624 systemd[1]: Populated /etc with preset unit settings. Nov 1 01:28:37.162632 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:28:37.162638 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:28:37.162645 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:28:37.162651 kernel: kauditd_printk_skb: 49 callbacks suppressed Nov 1 01:28:37.162657 kernel: audit: type=1334 audit(1761960515.651:92): prog-id=12 op=LOAD Nov 1 01:28:37.162663 kernel: audit: type=1334 audit(1761960515.651:93): prog-id=3 op=UNLOAD Nov 1 01:28:37.162669 kernel: audit: type=1334 audit(1761960515.696:94): prog-id=13 op=LOAD Nov 1 01:28:37.162675 kernel: audit: type=1334 audit(1761960515.741:95): prog-id=14 op=LOAD Nov 1 01:28:37.162681 kernel: audit: type=1334 audit(1761960515.741:96): prog-id=4 op=UNLOAD Nov 1 01:28:37.162690 kernel: audit: type=1334 audit(1761960515.741:97): prog-id=5 op=UNLOAD Nov 1 01:28:37.162700 kernel: audit: type=1334 audit(1761960515.784:98): prog-id=15 op=LOAD Nov 1 01:28:37.162707 kernel: audit: type=1334 audit(1761960515.784:99): prog-id=12 op=UNLOAD Nov 1 01:28:37.162712 kernel: audit: type=1334 audit(1761960515.826:100): prog-id=16 op=LOAD Nov 1 01:28:37.162718 kernel: audit: type=1334 audit(1761960515.866:101): prog-id=17 op=LOAD Nov 1 01:28:37.162724 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 01:28:37.162731 systemd[1]: Stopped iscsid.service. Nov 1 01:28:37.162738 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Nov 1 01:28:37.162744 systemd[1]: Stopped initrd-switch-root.service. Nov 1 01:28:37.162750 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 01:28:37.162759 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 01:28:37.162766 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 01:28:37.162772 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Nov 1 01:28:37.162778 systemd[1]: Created slice system-getty.slice. Nov 1 01:28:37.162786 systemd[1]: Created slice system-modprobe.slice. Nov 1 01:28:37.162792 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 01:28:37.162799 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 01:28:37.162805 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 01:28:37.162811 systemd[1]: Created slice user.slice. Nov 1 01:28:37.162818 systemd[1]: Started systemd-ask-password-console.path. Nov 1 01:28:37.162824 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 01:28:37.162831 systemd[1]: Set up automount boot.automount. Nov 1 01:28:37.162837 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 01:28:37.162844 systemd[1]: Stopped target initrd-switch-root.target. Nov 1 01:28:37.162851 systemd[1]: Stopped target initrd-fs.target. Nov 1 01:28:37.162857 systemd[1]: Stopped target initrd-root-fs.target. Nov 1 01:28:37.162864 systemd[1]: Reached target integritysetup.target. Nov 1 01:28:37.162870 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 01:28:37.162877 systemd[1]: Reached target remote-fs.target. Nov 1 01:28:37.162883 systemd[1]: Reached target slices.target. Nov 1 01:28:37.162890 systemd[1]: Reached target swap.target. Nov 1 01:28:37.162897 systemd[1]: Reached target torcx.target. Nov 1 01:28:37.162903 systemd[1]: Reached target veritysetup.target. Nov 1 01:28:37.162909 systemd[1]: Listening on systemd-coredump.socket. Nov 1 01:28:37.162916 systemd[1]: Listening on systemd-initctl.socket. Nov 1 01:28:37.162924 systemd[1]: Listening on systemd-networkd.socket. Nov 1 01:28:37.162930 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 01:28:37.162937 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 01:28:37.162943 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 01:28:37.162950 systemd[1]: Mounting dev-hugepages.mount... Nov 1 01:28:37.162957 systemd[1]: Mounting dev-mqueue.mount... Nov 1 01:28:37.162963 systemd[1]: Mounting media.mount... Nov 1 01:28:37.162970 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:28:37.162977 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 01:28:37.162984 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 01:28:37.162990 systemd[1]: Mounting tmp.mount... Nov 1 01:28:37.162997 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 01:28:37.163004 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 01:28:37.163010 systemd[1]: Starting kmod-static-nodes.service... Nov 1 01:28:37.163016 systemd[1]: Starting modprobe@configfs.service... Nov 1 01:28:37.163023 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 01:28:37.163029 systemd[1]: Starting modprobe@drm.service... Nov 1 01:28:37.163036 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 01:28:37.163044 systemd[1]: Starting modprobe@fuse.service... Nov 1 01:28:37.163051 kernel: fuse: init (API version 7.34) Nov 1 01:28:37.163057 systemd[1]: Starting modprobe@loop.service... 
Nov 1 01:28:37.163063 kernel: loop: module loaded Nov 1 01:28:37.163069 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 01:28:37.163076 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 01:28:37.163083 systemd[1]: Stopped systemd-fsck-root.service. Nov 1 01:28:37.163089 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 01:28:37.163096 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 01:28:37.163103 systemd[1]: Stopped systemd-journald.service. Nov 1 01:28:37.163110 systemd[1]: Starting systemd-journald.service... Nov 1 01:28:37.163116 systemd[1]: Starting systemd-modules-load.service... Nov 1 01:28:37.163125 systemd-journald[1253]: Journal started Nov 1 01:28:37.163152 systemd-journald[1253]: Runtime Journal (/run/log/journal/0bcd9dccb3d449b4988003fafbd00dc9) is 8.0M, max 640.1M, 632.1M free. Nov 1 01:28:33.723000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 01:28:34.022000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 01:28:34.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 01:28:34.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 01:28:34.025000 audit: BPF prog-id=10 op=LOAD Nov 1 01:28:34.025000 audit: BPF prog-id=10 op=UNLOAD Nov 1 01:28:34.025000 audit: BPF prog-id=11 op=LOAD Nov 1 01:28:34.025000 audit: BPF prog-id=11 op=UNLOAD Nov 1 01:28:34.090000 audit[1140]: AVC avc: denied { associate } for pid=1140 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 01:28:34.090000 audit[1140]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001278d2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1123 pid=1140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:34.090000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 01:28:34.116000 audit[1140]: AVC avc: denied { associate } for pid=1140 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 01:28:34.116000 audit[1140]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001279a9 a2=1ed a3=0 items=2 ppid=1123 pid=1140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:34.116000 audit: CWD cwd="/" Nov 1 01:28:34.116000 
audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:34.116000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:34.116000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 01:28:35.651000 audit: BPF prog-id=12 op=LOAD Nov 1 01:28:35.651000 audit: BPF prog-id=3 op=UNLOAD Nov 1 01:28:35.696000 audit: BPF prog-id=13 op=LOAD Nov 1 01:28:35.741000 audit: BPF prog-id=14 op=LOAD Nov 1 01:28:35.741000 audit: BPF prog-id=4 op=UNLOAD Nov 1 01:28:35.741000 audit: BPF prog-id=5 op=UNLOAD Nov 1 01:28:35.784000 audit: BPF prog-id=15 op=LOAD Nov 1 01:28:35.784000 audit: BPF prog-id=12 op=UNLOAD Nov 1 01:28:35.826000 audit: BPF prog-id=16 op=LOAD Nov 1 01:28:35.866000 audit: BPF prog-id=17 op=LOAD Nov 1 01:28:35.866000 audit: BPF prog-id=13 op=UNLOAD Nov 1 01:28:35.866000 audit: BPF prog-id=14 op=UNLOAD Nov 1 01:28:35.886000 audit: BPF prog-id=18 op=LOAD Nov 1 01:28:35.886000 audit: BPF prog-id=15 op=UNLOAD Nov 1 01:28:35.886000 audit: BPF prog-id=19 op=LOAD Nov 1 01:28:35.886000 audit: BPF prog-id=20 op=LOAD Nov 1 01:28:35.886000 audit: BPF prog-id=16 op=UNLOAD Nov 1 01:28:35.886000 audit: BPF prog-id=17 op=UNLOAD Nov 1 01:28:35.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:35.928000 audit: BPF prog-id=18 op=UNLOAD Nov 1 01:28:35.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:35.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:35.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:28:37.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.134000 audit: BPF prog-id=21 op=LOAD Nov 1 01:28:37.135000 audit: BPF prog-id=22 op=LOAD Nov 1 01:28:37.135000 audit: BPF prog-id=23 op=LOAD Nov 1 01:28:37.135000 audit: BPF prog-id=19 op=UNLOAD Nov 1 01:28:37.135000 audit: BPF prog-id=20 op=UNLOAD Nov 1 01:28:37.159000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 01:28:37.159000 audit[1253]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff059e2cd0 a2=4000 a3=7fff059e2d6c items=0 ppid=1 pid=1253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:37.159000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 01:28:34.089677 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:28:35.651131 systemd[1]: Queued start job for default target multi-user.target. Nov 1 01:28:34.090119 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 01:28:35.651137 systemd[1]: Unnecessary job was removed for dev-sda6.device. Nov 1 01:28:34.090132 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 01:28:35.888594 systemd[1]: systemd-journald.service: Deactivated successfully. 
Nov 1 01:28:34.090152 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Nov 1 01:28:34.090158 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=debug msg="skipped missing lower profile" missing profile=oem Nov 1 01:28:34.090176 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Nov 1 01:28:34.090184 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Nov 1 01:28:34.090311 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Nov 1 01:28:34.090335 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 01:28:34.090343 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 01:28:34.091557 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Nov 1 01:28:34.091578 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Nov 1 01:28:34.091590 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Nov 1 01:28:34.091599 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Nov 1 01:28:34.091609 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Nov 1 01:28:34.091617 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Nov 1 01:28:35.294046 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 01:28:35.294193 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 01:28:35.294250 
/usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 01:28:35.294349 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 01:28:35.294378 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Nov 1 01:28:35.294420 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-11-01T01:28:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Nov 1 01:28:37.194582 systemd[1]: Starting systemd-network-generator.service... Nov 1 01:28:37.217595 systemd[1]: Starting systemd-remount-fs.service... Nov 1 01:28:37.240441 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 01:28:37.273990 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 01:28:37.274012 systemd[1]: Stopped verity-setup.service. Nov 1 01:28:37.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.308445 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:28:37.323590 systemd[1]: Started systemd-journald.service. Nov 1 01:28:37.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.330958 systemd[1]: Mounted dev-hugepages.mount. Nov 1 01:28:37.338674 systemd[1]: Mounted dev-mqueue.mount. Nov 1 01:28:37.345660 systemd[1]: Mounted media.mount. Nov 1 01:28:37.352667 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 01:28:37.362668 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 01:28:37.370632 systemd[1]: Mounted tmp.mount. Nov 1 01:28:37.378731 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 01:28:37.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.386741 systemd[1]: Finished kmod-static-nodes.service. Nov 1 01:28:37.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.394778 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 01:28:37.394902 systemd[1]: Finished modprobe@configfs.service. 
Nov 1 01:28:37.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.403888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:28:37.404055 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 01:28:37.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.413131 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 01:28:37.413378 systemd[1]: Finished modprobe@drm.service. Nov 1 01:28:37.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.422248 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:28:37.422574 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 01:28:37.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.431261 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 01:28:37.431599 systemd[1]: Finished modprobe@fuse.service. Nov 1 01:28:37.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.440237 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:28:37.440596 systemd[1]: Finished modprobe@loop.service. Nov 1 01:28:37.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:28:37.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.449364 systemd[1]: Finished systemd-modules-load.service. Nov 1 01:28:37.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.458225 systemd[1]: Finished systemd-network-generator.service. Nov 1 01:28:37.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.467217 systemd[1]: Finished systemd-remount-fs.service. Nov 1 01:28:37.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.476215 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 01:28:37.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.485952 systemd[1]: Reached target network-pre.target. Nov 1 01:28:37.497256 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 01:28:37.508281 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 01:28:37.516665 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 01:28:37.520038 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 01:28:37.527142 systemd[1]: Starting systemd-journal-flush.service... Nov 1 01:28:37.530203 systemd-journald[1253]: Time spent on flushing to /var/log/journal/0bcd9dccb3d449b4988003fafbd00dc9 is 15.401ms for 1594 entries. Nov 1 01:28:37.530203 systemd-journald[1253]: System Journal (/var/log/journal/0bcd9dccb3d449b4988003fafbd00dc9) is 8.0M, max 195.6M, 187.6M free. Nov 1 01:28:37.564861 systemd-journald[1253]: Received client request to flush runtime journal. Nov 1 01:28:37.543523 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 01:28:37.544034 systemd[1]: Starting systemd-random-seed.service... Nov 1 01:28:37.554536 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 01:28:37.555055 systemd[1]: Starting systemd-sysctl.service... Nov 1 01:28:37.562025 systemd[1]: Starting systemd-sysusers.service... Nov 1 01:28:37.569017 systemd[1]: Starting systemd-udev-settle.service... Nov 1 01:28:37.576585 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 01:28:37.584582 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 01:28:37.592642 systemd[1]: Finished systemd-journal-flush.service. Nov 1 01:28:37.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.600637 systemd[1]: Finished systemd-random-seed.service. 
Nov 1 01:28:37.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.608648 systemd[1]: Finished systemd-sysctl.service. Nov 1 01:28:37.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.616625 systemd[1]: Finished systemd-sysusers.service. Nov 1 01:28:37.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.625589 systemd[1]: Reached target first-boot-complete.target. Nov 1 01:28:37.634157 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 01:28:37.643437 udevadm[1269]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 1 01:28:37.652128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 01:28:37.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.837488 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 01:28:37.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.845000 audit: BPF prog-id=24 op=LOAD Nov 1 01:28:37.845000 audit: BPF prog-id=25 op=LOAD Nov 1 01:28:37.845000 audit: BPF prog-id=7 op=UNLOAD Nov 1 01:28:37.845000 audit: BPF prog-id=8 op=UNLOAD Nov 1 01:28:37.847346 systemd[1]: Starting systemd-udevd.service... Nov 1 01:28:37.861805 systemd-udevd[1272]: Using default interface naming scheme 'v252'. Nov 1 01:28:37.880510 systemd[1]: Started systemd-udevd.service. Nov 1 01:28:37.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:37.890879 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Nov 1 01:28:37.890000 audit: BPF prog-id=26 op=LOAD Nov 1 01:28:37.892163 systemd[1]: Starting systemd-networkd.service... Nov 1 01:28:37.911000 audit: BPF prog-id=27 op=LOAD Nov 1 01:28:37.913459 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Nov 1 01:28:37.913621 kernel: ACPI: button: Sleep Button [SLPB] Nov 1 01:28:37.912000 audit: BPF prog-id=28 op=LOAD Nov 1 01:28:37.925000 audit: BPF prog-id=29 op=LOAD Nov 1 01:28:37.927209 systemd[1]: Starting systemd-userdbd.service... 
Nov 1 01:28:37.927403 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 1 01:28:37.943454 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 01:28:37.958403 kernel: ACPI: button: Power Button [PWRF] Nov 1 01:28:37.972438 kernel: IPMI message handler: version 39.2 Nov 1 01:28:37.931000 audit[1341]: AVC avc: denied { confidentiality } for pid=1341 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 01:28:37.931000 audit[1341]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f870ef23010 a1=4d9cc a2=7f8710be6bc5 a3=5 items=42 ppid=1272 pid=1341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:37.931000 audit: CWD cwd="/" Nov 1 01:28:37.931000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=1 name=(null) inode=23158 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=2 name=(null) inode=23158 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=3 name=(null) inode=23159 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=4 name=(null) inode=23158 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=5 name=(null) inode=23160 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=6 name=(null) inode=23158 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=7 name=(null) inode=23161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=8 name=(null) inode=23161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=9 name=(null) inode=23162 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=10 name=(null) inode=23161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=11 name=(null) inode=23163 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: 
PATH item=12 name=(null) inode=23161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=13 name=(null) inode=23164 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=14 name=(null) inode=23161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=15 name=(null) inode=23165 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=16 name=(null) inode=23161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=17 name=(null) inode=23166 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=18 name=(null) inode=23158 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=19 name=(null) inode=23167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=20 name=(null) inode=23167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=21 name=(null) inode=23168 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=22 name=(null) inode=23167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=23 name=(null) inode=23169 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=24 name=(null) inode=23167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=25 name=(null) inode=23170 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=26 name=(null) inode=23167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=27 name=(null) inode=23171 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=28 name=(null) inode=23167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=29 name=(null) inode=23172 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=30 name=(null) inode=23158 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=31 name=(null) inode=23173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=32 name=(null) inode=23173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=33 name=(null) inode=23174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=34 name=(null) inode=23173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=35 name=(null) inode=23175 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=36 name=(null) inode=23173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=37 name=(null) inode=23176 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=38 name=(null) inode=23173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=39 name=(null) inode=23177 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=40 name=(null) inode=23173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PATH item=41 name=(null) inode=23178 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:28:37.931000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 01:28:38.034619 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Nov 1 01:28:38.068688 kernel: ipmi device interface Nov 1 01:28:38.068714 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Nov 1 01:28:38.068829 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Nov 1 01:28:38.051971 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 01:28:38.076970 systemd[1]: Started systemd-userdbd.service. 
Nov 1 01:28:38.093458 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Nov 1 01:28:38.093567 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Nov 1 01:28:38.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:38.150571 kernel: ipmi_si: IPMI System Interface driver Nov 1 01:28:38.150601 kernel: iTCO_vendor_support: vendor-support=0 Nov 1 01:28:38.150619 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Nov 1 01:28:38.202445 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Nov 1 01:28:38.202461 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Nov 1 01:28:38.202474 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Nov 1 01:28:38.290802 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Nov 1 01:28:38.290900 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Nov 1 01:28:38.290977 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Nov 1 01:28:38.307099 kernel: ipmi_si: Adding ACPI-specified kcs state machine Nov 1 01:28:38.307114 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Nov 1 01:28:38.307190 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Nov 1 01:28:38.357163 systemd-networkd[1315]: bond0: netdev ready Nov 1 01:28:38.360334 systemd-networkd[1315]: lo: Link UP Nov 1 01:28:38.360337 systemd-networkd[1315]: lo: Gained carrier Nov 1 01:28:38.360918 systemd-networkd[1315]: Enumeration completed Nov 1 01:28:38.360993 systemd[1]: Started systemd-networkd.service. Nov 1 01:28:38.361333 systemd-networkd[1315]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Nov 1 01:28:38.367417 kernel: intel_rapl_common: Found RAPL domain package Nov 1 01:28:38.367453 kernel: intel_rapl_common: Found RAPL domain core Nov 1 01:28:38.367471 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Nov 1 01:28:38.367568 kernel: intel_rapl_common: Found RAPL domain dram Nov 1 01:28:38.368599 systemd-networkd[1315]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:15:b6:dd.network. Nov 1 01:28:38.417400 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Nov 1 01:28:38.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:38.522402 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Nov 1 01:28:38.541438 kernel: ipmi_ssif: IPMI SSIF Interface driver Nov 1 01:28:38.544716 systemd[1]: Finished systemd-udev-settle.service. Nov 1 01:28:38.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:38.553730 systemd[1]: Starting lvm2-activation-early.service... Nov 1 01:28:38.580688 lvm[1375]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 01:28:38.640642 systemd[1]: Finished lvm2-activation-early.service. 
Nov 1 01:28:38.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:38.648881 systemd[1]: Reached target cryptsetup.target. Nov 1 01:28:38.660375 systemd[1]: Starting lvm2-activation.service... Nov 1 01:28:38.671681 lvm[1376]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 01:28:38.716210 systemd[1]: Finished lvm2-activation.service. Nov 1 01:28:38.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:38.724602 systemd[1]: Reached target local-fs-pre.target. Nov 1 01:28:38.732514 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 01:28:38.732542 systemd[1]: Reached target local-fs.target. Nov 1 01:28:38.740515 systemd[1]: Reached target machines.target. Nov 1 01:28:38.749586 systemd[1]: Starting ldconfig.service... Nov 1 01:28:38.757321 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 01:28:38.757364 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:28:38.758539 systemd[1]: Starting systemd-boot-update.service... Nov 1 01:28:38.766601 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 01:28:38.777861 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 01:28:38.779430 systemd[1]: Starting systemd-sysext.service... Nov 1 01:28:38.779942 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1378 (bootctl) Nov 1 01:28:38.781222 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 01:28:38.793016 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 01:28:38.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:38.795036 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 01:28:38.812200 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 01:28:38.812382 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 01:28:38.856401 kernel: loop0: detected capacity change from 0 to 219144 Nov 1 01:28:38.921407 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 01:28:38.944410 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Nov 1 01:28:38.945592 systemd-networkd[1315]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:15:b6:dc.network. Nov 1 01:28:38.945952 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 01:28:38.946520 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 01:28:38.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:28:38.966422 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 01:28:38.996569 systemd-fsck[1389]: fsck.fat 4.2 (2021-01-31) Nov 1 01:28:38.996569 systemd-fsck[1389]: /dev/sda1: 790 files, 120773/258078 clusters Nov 1 01:28:38.997393 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 01:28:39.008564 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 01:28:39.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:39.019348 systemd[1]: Mounting boot.mount... Nov 1 01:28:39.035437 kernel: loop1: detected capacity change from 0 to 219144 Nov 1 01:28:39.041218 systemd[1]: Mounted boot.mount. Nov 1 01:28:39.049507 (sd-sysext)[1391]: Using extensions 'kubernetes'. Nov 1 01:28:39.049698 (sd-sysext)[1391]: Merged extensions into '/usr'. Nov 1 01:28:39.060991 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:28:39.061797 systemd[1]: Mounting usr-share-oem.mount... Nov 1 01:28:39.069582 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 01:28:39.070219 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 01:28:39.078083 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 01:28:39.086195 systemd[1]: Starting modprobe@loop.service... Nov 1 01:28:39.092539 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 01:28:39.092638 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:28:39.092735 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:28:39.095183 systemd[1]: Finished systemd-boot-update.service. Nov 1 01:28:39.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:39.109755 systemd[1]: Mounted usr-share-oem.mount. Nov 1 01:28:39.115442 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 1 01:28:39.115628 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 01:28:39.135407 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Nov 1 01:28:39.150777 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:28:39.150868 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 01:28:39.155404 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Nov 1 01:28:39.156826 systemd-networkd[1315]: bond0: Link UP Nov 1 01:28:39.157083 systemd-networkd[1315]: enp1s0f1np1: Link UP Nov 1 01:28:39.157258 systemd-networkd[1315]: enp1s0f1np1: Gained carrier Nov 1 01:28:39.158534 systemd-networkd[1315]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:15:b6:dc.network. Nov 1 01:28:39.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:28:39.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:39.189682 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:28:39.189745 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 01:28:39.194436 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Nov 1 01:28:39.194461 kernel: bond0: active interface up! Nov 1 01:28:39.194476 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Nov 1 01:28:39.203790 ldconfig[1377]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 01:28:39.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:39.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:39.225714 systemd[1]: Finished ldconfig.service. Nov 1 01:28:39.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:39.237663 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:28:39.237726 systemd[1]: Finished modprobe@loop.service. Nov 1 01:28:39.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:39.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:39.252775 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 01:28:39.252819 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 01:28:39.253330 systemd[1]: Finished systemd-sysext.service. Nov 1 01:28:39.265457 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 01:28:39.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:39.273115 systemd[1]: Starting ensure-sysext.service... Nov 1 01:28:39.280010 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 01:28:39.285932 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 01:28:39.287065 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 01:28:39.288115 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 01:28:39.289755 systemd[1]: Reloading. 
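The bonding driver messages above ("link status definitely up, 25000 Mbps full duplex", "active interface up!") are free-form kernel text rather than structured data. Below is a minimal sketch in Python of pulling per-slave link state out of captured journal text like this; the regex is an assumption fitted to the exact wording shown here, not a stable kernel interface.

import re

# Matches kernel bonding messages of the form seen above, e.g.
# "bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex"
SLAVE_UP = re.compile(
    r"(?P<bond>\w+): \(slave (?P<slave>\S+)\): link status definitely up, "
    r"(?P<speed>\d+) Mbps (?P<duplex>\w+) duplex"
)

def slave_states(journal_text: str) -> dict[str, tuple[int, str]]:
    """Return {slave: (speed_mbps, duplex)} for every 'definitely up' message."""
    states = {}
    for m in SLAVE_UP.finditer(journal_text):
        states[m.group("slave")] = (int(m.group("speed")), m.group("duplex"))
    return states

sample = (
    "bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex "
    "bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex"
)
print(slave_states(sample))  # both slaves report 25000 Mbps, full duplex

Fed the text of this section, it would show both bond0 slaves coming up at 25 Gbit/s, matching the entries above.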
Nov 1 01:28:39.315476 /usr/lib/systemd/system-generators/torcx-generator[1419]: time="2025-11-01T01:28:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:28:39.315491 /usr/lib/systemd/system-generators/torcx-generator[1419]: time="2025-11-01T01:28:39Z" level=info msg="torcx already run" Nov 1 01:28:39.320768 systemd-networkd[1315]: enp1s0f0np0: Link UP Nov 1 01:28:39.320948 systemd-networkd[1315]: bond0: Gained carrier Nov 1 01:28:39.321044 systemd-networkd[1315]: enp1s0f0np0: Gained carrier Nov 1 01:28:39.329805 systemd-networkd[1315]: enp1s0f1np1: Link DOWN Nov 1 01:28:39.329808 systemd-networkd[1315]: enp1s0f1np1: Lost carrier Nov 1 01:28:39.352419 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Nov 1 01:28:39.352479 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Nov 1 01:28:39.391119 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:28:39.391127 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:28:39.402162 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:28:39.442000 audit: BPF prog-id=30 op=LOAD Nov 1 01:28:39.442000 audit: BPF prog-id=31 op=LOAD Nov 1 01:28:39.443000 audit: BPF prog-id=24 op=UNLOAD Nov 1 01:28:39.443000 audit: BPF prog-id=25 op=UNLOAD Nov 1 01:28:39.443000 audit: BPF prog-id=32 op=LOAD Nov 1 01:28:39.443000 audit: BPF prog-id=27 op=UNLOAD Nov 1 01:28:39.443000 audit: BPF prog-id=33 op=LOAD Nov 1 01:28:39.444000 audit: BPF prog-id=34 op=LOAD Nov 1 01:28:39.444000 audit: BPF prog-id=28 op=UNLOAD Nov 1 01:28:39.444000 audit: BPF prog-id=29 op=UNLOAD Nov 1 01:28:39.444000 audit: BPF prog-id=35 op=LOAD Nov 1 01:28:39.444000 audit: BPF prog-id=21 op=UNLOAD Nov 1 01:28:39.444000 audit: BPF prog-id=36 op=LOAD Nov 1 01:28:39.444000 audit: BPF prog-id=37 op=LOAD Nov 1 01:28:39.444000 audit: BPF prog-id=22 op=UNLOAD Nov 1 01:28:39.444000 audit: BPF prog-id=23 op=UNLOAD Nov 1 01:28:39.445000 audit: BPF prog-id=38 op=LOAD Nov 1 01:28:39.445000 audit: BPF prog-id=26 op=UNLOAD Nov 1 01:28:39.447892 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 01:28:39.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:39.458307 systemd[1]: Starting audit-rules.service... Nov 1 01:28:39.466050 systemd[1]: Starting clean-ca-certificates.service... Nov 1 01:28:39.475111 systemd[1]: Starting systemd-journal-catalog-update.service... 
Nov 1 01:28:39.474000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 01:28:39.474000 audit[1497]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe899a40a0 a2=420 a3=0 items=0 ppid=1481 pid=1497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:39.474000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 01:28:39.476031 augenrules[1497]: No rules Nov 1 01:28:39.484464 systemd[1]: Starting systemd-resolved.service... Nov 1 01:28:39.492465 systemd[1]: Starting systemd-timesyncd.service... Nov 1 01:28:39.500014 systemd[1]: Starting systemd-update-utmp.service... Nov 1 01:28:39.512758 systemd[1]: Finished audit-rules.service. Nov 1 01:28:39.518447 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 01:28:39.531610 systemd[1]: Finished clean-ca-certificates.service. Nov 1 01:28:39.540439 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Nov 1 01:28:39.541590 systemd-networkd[1315]: enp1s0f1np1: Link UP Nov 1 01:28:39.541775 systemd-networkd[1315]: enp1s0f1np1: Gained carrier Nov 1 01:28:39.548559 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 01:28:39.561712 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 01:28:39.562350 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 01:28:39.576756 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 01:28:39.582439 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Nov 1 01:28:39.598772 systemd[1]: Starting modprobe@loop.service... Nov 1 01:28:39.603442 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Nov 1 01:28:39.609456 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 01:28:39.609522 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:28:39.610178 systemd[1]: Starting systemd-update-done.service... Nov 1 01:28:39.616516 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 01:28:39.617142 systemd[1]: Finished systemd-update-utmp.service. Nov 1 01:28:39.625692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:28:39.625783 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 01:28:39.628981 systemd-resolved[1503]: Positive Trust Anchors: Nov 1 01:28:39.628986 systemd-resolved[1503]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:28:39.629005 systemd-resolved[1503]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 01:28:39.632975 systemd-resolved[1503]: Using system hostname 'ci-3510.3.8-n-34cd8b9336'. 
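The audit PROCTITLE fields above carry the process command line hex-encoded, with NUL bytes separating the arguments. A small Python sketch that turns such a string back into argv; the sample value is the one logged above and decodes to /sbin/auditctl -R /etc/audit/audit.rules.

# Decode an audit PROCTITLE hex string into the original argv list.
# The kernel records the command line as raw bytes with NUL separators
# between arguments, and the audit log hex-encodes those bytes.
def decode_proctitle(hex_string: str) -> list[str]:
    raw = bytes.fromhex(hex_string)
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

# Sample copied from the PROCTITLE entry above; it decodes to
# ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))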
Nov 1 01:28:39.633647 systemd[1]: Started systemd-timesyncd.service. Nov 1 01:28:39.641700 systemd[1]: Started systemd-resolved.service. Nov 1 01:28:39.649676 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:28:39.649741 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 01:28:39.657695 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:28:39.657759 systemd[1]: Finished modprobe@loop.service. Nov 1 01:28:39.665683 systemd[1]: Finished systemd-update-done.service. Nov 1 01:28:39.674697 systemd[1]: Reached target network.target. Nov 1 01:28:39.682541 systemd[1]: Reached target nss-lookup.target. Nov 1 01:28:39.690543 systemd[1]: Reached target time-set.target. Nov 1 01:28:39.698638 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 01:28:39.699319 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 01:28:39.707049 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 01:28:39.714020 systemd[1]: Starting modprobe@loop.service... Nov 1 01:28:39.720523 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 01:28:39.720590 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:28:39.720653 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 01:28:39.721178 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:28:39.721244 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 01:28:39.729696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:28:39.729757 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 01:28:39.737683 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:28:39.737743 systemd[1]: Finished modprobe@loop.service. Nov 1 01:28:39.745703 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 01:28:39.745777 systemd[1]: Reached target sysinit.target. Nov 1 01:28:39.753576 systemd[1]: Started motdgen.path. Nov 1 01:28:39.760559 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 01:28:39.770617 systemd[1]: Started logrotate.timer. Nov 1 01:28:39.777582 systemd[1]: Started mdadm.timer. Nov 1 01:28:39.784546 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 01:28:39.792518 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 01:28:39.792579 systemd[1]: Reached target paths.target. Nov 1 01:28:39.799533 systemd[1]: Reached target timers.target. Nov 1 01:28:39.806691 systemd[1]: Listening on dbus.socket. Nov 1 01:28:39.817839 systemd[1]: Starting docker.socket... Nov 1 01:28:39.825893 systemd[1]: Listening on sshd.socket. Nov 1 01:28:39.832517 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:28:39.832587 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 01:28:39.834214 systemd[1]: Listening on docker.socket. 
Nov 1 01:28:39.842307 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 01:28:39.842369 systemd[1]: Reached target sockets.target. Nov 1 01:28:39.850513 systemd[1]: Reached target basic.target. Nov 1 01:28:39.857456 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:28:39.857502 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 01:28:39.857551 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 01:28:39.858095 systemd[1]: Starting containerd.service... Nov 1 01:28:39.864915 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Nov 1 01:28:39.874075 systemd[1]: Starting coreos-metadata.service... Nov 1 01:28:39.881050 systemd[1]: Starting dbus.service... Nov 1 01:28:39.888234 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 01:28:39.892456 jq[1524]: false Nov 1 01:28:39.897002 coreos-metadata[1517]: Nov 01 01:28:39.896 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:28:39.897242 systemd[1]: Starting extend-filesystems.service... Nov 1 01:28:39.899574 dbus-daemon[1523]: [system] SELinux support is enabled Nov 1 01:28:39.904511 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 01:28:39.905323 extend-filesystems[1525]: Found loop1 Nov 1 01:28:39.912525 extend-filesystems[1525]: Found sda Nov 1 01:28:39.912525 extend-filesystems[1525]: Found sda1 Nov 1 01:28:39.912525 extend-filesystems[1525]: Found sda2 Nov 1 01:28:39.912525 extend-filesystems[1525]: Found sda3 Nov 1 01:28:39.912525 extend-filesystems[1525]: Found usr Nov 1 01:28:39.912525 extend-filesystems[1525]: Found sda4 Nov 1 01:28:39.912525 extend-filesystems[1525]: Found sda6 Nov 1 01:28:39.912525 extend-filesystems[1525]: Found sda7 Nov 1 01:28:39.912525 extend-filesystems[1525]: Found sda9 Nov 1 01:28:39.912525 extend-filesystems[1525]: Checking size of /dev/sda9 Nov 1 01:28:39.912525 extend-filesystems[1525]: Resized partition /dev/sda9 Nov 1 01:28:40.035446 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Nov 1 01:28:40.035490 coreos-metadata[1520]: Nov 01 01:28:39.906 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:28:39.905575 systemd[1]: Starting modprobe@drm.service... Nov 1 01:28:40.035639 extend-filesystems[1535]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 01:28:39.927399 systemd[1]: Starting motdgen.service... Nov 1 01:28:39.945511 systemd[1]: Starting prepare-helm.service... Nov 1 01:28:39.964169 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 01:28:39.992114 systemd[1]: Starting sshd-keygen.service... Nov 1 01:28:40.027101 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 01:28:40.043479 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:28:40.044262 systemd[1]: Starting tcsd.service... Nov 1 01:28:40.057703 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 01:28:40.058107 systemd[1]: Starting update-engine.service... 
Nov 1 01:28:40.065045 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 01:28:40.066492 jq[1556]: true Nov 1 01:28:40.073479 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:28:40.074605 systemd[1]: Started dbus.service. Nov 1 01:28:40.083363 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 01:28:40.083461 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 01:28:40.083705 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 01:28:40.083773 systemd[1]: Finished modprobe@drm.service. Nov 1 01:28:40.091685 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 01:28:40.091767 systemd[1]: Finished motdgen.service. Nov 1 01:28:40.099085 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 01:28:40.099173 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 01:28:40.102395 update_engine[1555]: I1101 01:28:40.101162 1555 main.cc:92] Flatcar Update Engine starting Nov 1 01:28:40.105313 update_engine[1555]: I1101 01:28:40.105278 1555 update_check_scheduler.cc:74] Next update check in 7m26s Nov 1 01:28:40.110074 jq[1560]: true Nov 1 01:28:40.110915 systemd[1]: Finished ensure-sysext.service. Nov 1 01:28:40.119180 env[1561]: time="2025-11-01T01:28:40.119150798Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 01:28:40.125654 tar[1558]: linux-amd64/LICENSE Nov 1 01:28:40.125842 tar[1558]: linux-amd64/helm Nov 1 01:28:40.126775 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Nov 1 01:28:40.126861 systemd[1]: Condition check resulted in tcsd.service being skipped. Nov 1 01:28:40.127595 env[1561]: time="2025-11-01T01:28:40.127544364Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 01:28:40.127637 env[1561]: time="2025-11-01T01:28:40.127625072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:28:40.128233 env[1561]: time="2025-11-01T01:28:40.128189639Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:28:40.128233 env[1561]: time="2025-11-01T01:28:40.128204058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:28:40.128332 env[1561]: time="2025-11-01T01:28:40.128322044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:28:40.128358 env[1561]: time="2025-11-01T01:28:40.128332719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 01:28:40.128358 env[1561]: time="2025-11-01T01:28:40.128340421Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 01:28:40.128358 env[1561]: time="2025-11-01T01:28:40.128345908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Nov 1 01:28:40.128408 env[1561]: time="2025-11-01T01:28:40.128388009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:28:40.128591 env[1561]: time="2025-11-01T01:28:40.128556553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:28:40.128666 env[1561]: time="2025-11-01T01:28:40.128625894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:28:40.128666 env[1561]: time="2025-11-01T01:28:40.128635684Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 01:28:40.128666 env[1561]: time="2025-11-01T01:28:40.128662248Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 01:28:40.128723 env[1561]: time="2025-11-01T01:28:40.128670129Z" level=info msg="metadata content store policy set" policy=shared Nov 1 01:28:40.131446 systemd[1]: Started update-engine.service. Nov 1 01:28:40.140689 systemd[1]: Started locksmithd.service. Nov 1 01:28:40.146514 env[1561]: time="2025-11-01T01:28:40.146496854Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 01:28:40.146546 env[1561]: time="2025-11-01T01:28:40.146519776Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 01:28:40.146546 env[1561]: time="2025-11-01T01:28:40.146528000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 01:28:40.149387 env[1561]: time="2025-11-01T01:28:40.146550222Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 01:28:40.149387 env[1561]: time="2025-11-01T01:28:40.146559423Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 01:28:40.149387 env[1561]: time="2025-11-01T01:28:40.146567442Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 01:28:40.149387 env[1561]: time="2025-11-01T01:28:40.146574489Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 01:28:40.149387 env[1561]: time="2025-11-01T01:28:40.146582038Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 01:28:40.149387 env[1561]: time="2025-11-01T01:28:40.146589035Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 01:28:40.149387 env[1561]: time="2025-11-01T01:28:40.146596759Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 01:28:40.149387 env[1561]: time="2025-11-01T01:28:40.146604312Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 01:28:40.149387 env[1561]: time="2025-11-01T01:28:40.146611136Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Nov 1 01:28:40.148510 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 01:28:40.149566 env[1561]: time="2025-11-01T01:28:40.149413560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 01:28:40.149566 env[1561]: time="2025-11-01T01:28:40.149466545Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 01:28:40.148525 systemd[1]: Reached target system-config.target. Nov 1 01:28:40.149627 env[1561]: time="2025-11-01T01:28:40.149597885Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 01:28:40.149627 env[1561]: time="2025-11-01T01:28:40.149612712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 01:28:40.149627 env[1561]: time="2025-11-01T01:28:40.149620248Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 01:28:40.149673 env[1561]: time="2025-11-01T01:28:40.149649613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 01:28:40.149673 env[1561]: time="2025-11-01T01:28:40.149657403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 01:28:40.149673 env[1561]: time="2025-11-01T01:28:40.149664492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 01:28:40.149673 env[1561]: time="2025-11-01T01:28:40.149670552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 01:28:40.149736 env[1561]: time="2025-11-01T01:28:40.149676904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 01:28:40.149736 env[1561]: time="2025-11-01T01:28:40.149683785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 01:28:40.149736 env[1561]: time="2025-11-01T01:28:40.149690500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 01:28:40.149736 env[1561]: time="2025-11-01T01:28:40.149696915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 01:28:40.149736 env[1561]: time="2025-11-01T01:28:40.149704533Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 01:28:40.149815 env[1561]: time="2025-11-01T01:28:40.149769648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 01:28:40.149815 env[1561]: time="2025-11-01T01:28:40.149782302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 01:28:40.149815 env[1561]: time="2025-11-01T01:28:40.149789880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 01:28:40.149815 env[1561]: time="2025-11-01T01:28:40.149796369Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 01:28:40.149815 env[1561]: time="2025-11-01T01:28:40.149804405Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 01:28:40.149815 env[1561]: time="2025-11-01T01:28:40.149811806Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 01:28:40.149924 env[1561]: time="2025-11-01T01:28:40.149821492Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 01:28:40.149924 env[1561]: time="2025-11-01T01:28:40.149844087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 01:28:40.149991 env[1561]: time="2025-11-01T01:28:40.149964073Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 01:28:40.151956 env[1561]: time="2025-11-01T01:28:40.149998188Z" level=info msg="Connect containerd service" Nov 1 01:28:40.151956 env[1561]: time="2025-11-01T01:28:40.150018860Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 01:28:40.153563 env[1561]: time="2025-11-01T01:28:40.153524541Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 01:28:40.153798 env[1561]: time="2025-11-01T01:28:40.153781742Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Nov 1 01:28:40.153846 env[1561]: time="2025-11-01T01:28:40.153817149Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 01:28:40.153846 env[1561]: time="2025-11-01T01:28:40.153620094Z" level=info msg="Start subscribing containerd event" Nov 1 01:28:40.153904 env[1561]: time="2025-11-01T01:28:40.153875851Z" level=info msg="Start recovering state" Nov 1 01:28:40.153943 env[1561]: time="2025-11-01T01:28:40.153933078Z" level=info msg="Start event monitor" Nov 1 01:28:40.153972 env[1561]: time="2025-11-01T01:28:40.153938229Z" level=info msg="containerd successfully booted in 0.035119s" Nov 1 01:28:40.153972 env[1561]: time="2025-11-01T01:28:40.153945939Z" level=info msg="Start snapshots syncer" Nov 1 01:28:40.153972 env[1561]: time="2025-11-01T01:28:40.153957542Z" level=info msg="Start cni network conf syncer for default" Nov 1 01:28:40.153972 env[1561]: time="2025-11-01T01:28:40.153962777Z" level=info msg="Start streaming server" Nov 1 01:28:40.161774 systemd[1]: Starting systemd-logind.service... Nov 1 01:28:40.162300 bash[1592]: Updated "/home/core/.ssh/authorized_keys" Nov 1 01:28:40.168464 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 01:28:40.168493 systemd[1]: Reached target user-config.target. Nov 1 01:28:40.176564 systemd[1]: Started containerd.service. Nov 1 01:28:40.183657 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 01:28:40.183700 systemd-logind[1596]: Watching system buttons on /dev/input/event3 (Power Button) Nov 1 01:28:40.183714 systemd-logind[1596]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 1 01:28:40.183726 systemd-logind[1596]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Nov 1 01:28:40.183881 systemd-logind[1596]: New seat seat0. Nov 1 01:28:40.193661 systemd[1]: Started systemd-logind.service. Nov 1 01:28:40.202273 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 01:28:40.406267 tar[1558]: linux-amd64/README.md Nov 1 01:28:40.408864 systemd[1]: Finished prepare-helm.service. Nov 1 01:28:40.437445 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Nov 1 01:28:40.465247 extend-filesystems[1535]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 1 01:28:40.465247 extend-filesystems[1535]: old_desc_blocks = 1, new_desc_blocks = 56 Nov 1 01:28:40.465247 extend-filesystems[1535]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Nov 1 01:28:40.502528 extend-filesystems[1525]: Resized filesystem in /dev/sda9 Nov 1 01:28:40.502528 extend-filesystems[1525]: Found sdb Nov 1 01:28:40.465692 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 01:28:40.465773 systemd[1]: Finished extend-filesystems.service. Nov 1 01:28:40.546852 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 01:28:40.558619 systemd[1]: Finished sshd-keygen.service. Nov 1 01:28:40.562527 systemd-networkd[1315]: bond0: Gained IPv6LL Nov 1 01:28:40.567455 systemd[1]: Starting issuegen.service... Nov 1 01:28:40.574707 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 01:28:40.574782 systemd[1]: Finished issuegen.service. Nov 1 01:28:40.582269 systemd[1]: Starting systemd-user-sessions.service... Nov 1 01:28:40.590702 systemd[1]: Finished systemd-user-sessions.service. Nov 1 01:28:40.600137 systemd[1]: Started getty@tty1.service. 
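The resize figures above are counts of 4 KiB blocks (the journal itself notes "(4k) blocks long"), so converting them gives the before/after size of /dev/sda9. A quick Python check of that arithmetic:

# ext4 on /dev/sda9 uses 4 KiB blocks, per the resize messages above.
BLOCK = 4096
before, after = 553472, 116605649   # block counts reported by the kernel / resize2fs

def human(nbytes: float) -> str:
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if nbytes < 1024:
            return f"{nbytes:.2f} {unit}"
        nbytes /= 1024
    return f"{nbytes:.2f} PiB"

print(human(before * BLOCK))   # ~2.11 GiB before the online resize
print(human(after * BLOCK))    # ~444.82 GiB after growing to the full partition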
Nov 1 01:28:40.607118 systemd[1]: Started serial-getty@ttyS1.service. Nov 1 01:28:40.615608 systemd[1]: Reached target getty.target. Nov 1 01:28:40.883408 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 01:28:40.892702 systemd[1]: Reached target network-online.target. Nov 1 01:28:40.901319 systemd[1]: Starting kubelet.service... Nov 1 01:28:41.579760 systemd[1]: Started kubelet.service. Nov 1 01:28:41.773483 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Nov 1 01:28:41.994324 kubelet[1623]: E1101 01:28:41.994277 1623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:28:41.995336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:28:41.995447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:28:45.654537 login[1617]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Nov 1 01:28:45.655946 login[1618]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 01:28:45.663307 systemd-logind[1596]: New session 1 of user core. Nov 1 01:28:45.664043 systemd[1]: Created slice user-500.slice. Nov 1 01:28:45.664645 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 01:28:45.670221 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 01:28:45.670952 systemd[1]: Starting user@500.service... Nov 1 01:28:45.672893 (systemd)[1646]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:28:45.745430 systemd[1646]: Queued start job for default target default.target. Nov 1 01:28:45.745684 systemd[1646]: Reached target paths.target. Nov 1 01:28:45.745695 systemd[1646]: Reached target sockets.target. Nov 1 01:28:45.745703 systemd[1646]: Reached target timers.target. Nov 1 01:28:45.745710 systemd[1646]: Reached target basic.target. Nov 1 01:28:45.745731 systemd[1646]: Reached target default.target. Nov 1 01:28:45.745746 systemd[1646]: Startup finished in 69ms. Nov 1 01:28:45.745801 systemd[1]: Started user@500.service. Nov 1 01:28:45.746451 systemd[1]: Started session-1.scope. Nov 1 01:28:45.811888 coreos-metadata[1520]: Nov 01 01:28:45.811 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Nov 1 01:28:45.812695 coreos-metadata[1517]: Nov 01 01:28:45.811 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Nov 1 01:28:46.655275 login[1617]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 01:28:46.666632 systemd-logind[1596]: New session 2 of user core. Nov 1 01:28:46.670132 systemd[1]: Started session-2.scope. 
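The kubelet failure above happens because /var/lib/kubelet/config.yaml does not exist yet at this point in the boot; the unit exits with status 1 and systemd later schedules a restart ("restart counter is at 1" further down). A minimal Python sketch for spotting that pattern in captured journal text; the phrases it matches are copied from the entries in this log rather than any general kubelet interface.

import re

# The two journal lines that identify this condition, as they appear in this log.
CONFIG_MISSING = re.compile(
    r'kubelet\[\d+\]: .*failed to load kubelet config file.*no such file or directory'
)
RESTART = re.compile(r'kubelet\.service: Scheduled restart job, restart counter is at (\d+)')

def kubelet_first_boot_state(journal_text: str) -> dict:
    """Report whether the 'config.yaml missing' failure and any restarts were seen."""
    restarts = [int(n) for n in RESTART.findall(journal_text)]
    return {
        "config_missing": bool(CONFIG_MISSING.search(journal_text)),
        "restart_counter": max(restarts) if restarts else 0,
    }

Fed the text of this section, it would report config_missing as True and a restart counter of 1.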
Nov 1 01:28:46.812387 coreos-metadata[1517]: Nov 01 01:28:46.812 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Nov 1 01:28:46.812663 coreos-metadata[1520]: Nov 01 01:28:46.812 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Nov 1 01:28:47.305466 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Nov 1 01:28:47.312443 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Nov 1 01:28:47.867454 systemd[1]: Created slice system-sshd.slice. Nov 1 01:28:47.868057 systemd[1]: Started sshd@0-139.178.94.15:22-147.75.109.163:53612.service. Nov 1 01:28:47.914783 sshd[1667]: Accepted publickey for core from 147.75.109.163 port 53612 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:28:47.916025 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:28:47.920713 systemd-logind[1596]: New session 3 of user core. Nov 1 01:28:47.922178 systemd[1]: Started session-3.scope. Nov 1 01:28:47.981132 systemd[1]: Started sshd@1-139.178.94.15:22-147.75.109.163:53616.service. Nov 1 01:28:48.013602 systemd-timesyncd[1504]: Contacted time server 205.233.73.201:123 (0.flatcar.pool.ntp.org). Nov 1 01:28:48.013636 systemd-timesyncd[1504]: Initial clock synchronization to Sat 2025-11-01 01:28:48.097502 UTC. Nov 1 01:28:48.015740 sshd[1672]: Accepted publickey for core from 147.75.109.163 port 53616 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:28:48.016515 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:28:48.018735 systemd-logind[1596]: New session 4 of user core. Nov 1 01:28:48.019291 systemd[1]: Started session-4.scope. Nov 1 01:28:48.070076 sshd[1672]: pam_unix(sshd:session): session closed for user core Nov 1 01:28:48.071973 systemd[1]: sshd@1-139.178.94.15:22-147.75.109.163:53616.service: Deactivated successfully. Nov 1 01:28:48.072315 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 01:28:48.072670 systemd-logind[1596]: Session 4 logged out. Waiting for processes to exit. Nov 1 01:28:48.073310 systemd[1]: Started sshd@2-139.178.94.15:22-147.75.109.163:53618.service. Nov 1 01:28:48.073767 systemd-logind[1596]: Removed session 4. Nov 1 01:28:48.109236 sshd[1678]: Accepted publickey for core from 147.75.109.163 port 53618 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:28:48.110507 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:28:48.114685 systemd-logind[1596]: New session 5 of user core. Nov 1 01:28:48.115935 systemd[1]: Started session-5.scope. Nov 1 01:28:48.185248 sshd[1678]: pam_unix(sshd:session): session closed for user core Nov 1 01:28:48.191166 systemd[1]: sshd@2-139.178.94.15:22-147.75.109.163:53618.service: Deactivated successfully. Nov 1 01:28:48.193003 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 01:28:48.194841 systemd-logind[1596]: Session 5 logged out. Waiting for processes to exit. Nov 1 01:28:48.197127 systemd-logind[1596]: Removed session 5. Nov 1 01:28:48.966797 coreos-metadata[1517]: Nov 01 01:28:48.966 INFO Fetch successful Nov 1 01:28:49.003964 unknown[1517]: wrote ssh authorized keys file for user: core Nov 1 01:28:49.016209 update-ssh-keys[1684]: Updated "/home/core/.ssh/authorized_keys" Nov 1 01:28:49.016450 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Nov 1 01:28:49.023106 coreos-metadata[1520]: Nov 01 01:28:49.023 INFO Fetch successful Nov 1 01:28:49.059714 systemd[1]: Finished coreos-metadata.service. Nov 1 01:28:49.060620 systemd[1]: Started packet-phone-home.service. Nov 1 01:28:49.060812 systemd[1]: Reached target multi-user.target. Nov 1 01:28:49.061558 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 01:28:49.065967 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 01:28:49.066085 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 01:28:49.066147 curl[1687]: % Total % Received % Xferd Average Speed Time Time Time Current Nov 1 01:28:49.066147 curl[1687]: Dload Upload Total Spent Left Speed Nov 1 01:28:49.066253 systemd[1]: Startup finished in 1.862s (kernel) + 25.572s (initrd) + 15.657s (userspace) = 43.091s. Nov 1 01:28:49.500063 curl[1687]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Nov 1 01:28:49.502628 systemd[1]: packet-phone-home.service: Deactivated successfully. Nov 1 01:28:52.196641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 01:28:52.197513 systemd[1]: Stopped kubelet.service. Nov 1 01:28:52.200491 systemd[1]: Starting kubelet.service... Nov 1 01:28:52.438001 systemd[1]: Started kubelet.service. Nov 1 01:28:52.463443 kubelet[1693]: E1101 01:28:52.463374 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:28:52.465262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:28:52.465358 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:28:58.253067 systemd[1]: Started sshd@3-139.178.94.15:22-147.75.109.163:50068.service. Nov 1 01:28:58.288762 sshd[1710]: Accepted publickey for core from 147.75.109.163 port 50068 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:28:58.289440 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:28:58.291947 systemd-logind[1596]: New session 6 of user core. Nov 1 01:28:58.292367 systemd[1]: Started session-6.scope. Nov 1 01:28:58.344305 sshd[1710]: pam_unix(sshd:session): session closed for user core Nov 1 01:28:58.345953 systemd[1]: sshd@3-139.178.94.15:22-147.75.109.163:50068.service: Deactivated successfully. Nov 1 01:28:58.346263 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 01:28:58.346639 systemd-logind[1596]: Session 6 logged out. Waiting for processes to exit. Nov 1 01:28:58.347167 systemd[1]: Started sshd@4-139.178.94.15:22-147.75.109.163:50072.service. Nov 1 01:28:58.347582 systemd-logind[1596]: Removed session 6. Nov 1 01:28:58.383838 sshd[1716]: Accepted publickey for core from 147.75.109.163 port 50072 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:28:58.384785 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:28:58.388126 systemd-logind[1596]: New session 7 of user core. Nov 1 01:28:58.388804 systemd[1]: Started session-7.scope. Nov 1 01:28:58.441760 sshd[1716]: pam_unix(sshd:session): session closed for user core Nov 1 01:28:58.449110 systemd[1]: sshd@4-139.178.94.15:22-147.75.109.163:50072.service: Deactivated successfully. 
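The "Startup finished" summary above is simply the three boot phases added together; a one-line Python check of the figures reported for this boot:

# Boot phase durations reported by systemd above, in seconds.
kernel, initrd, userspace = 1.862, 25.572, 15.657

total = kernel + initrd + userspace
print(f"{total:.3f}s")              # 43.091s, matching the reported total
assert f"{total:.3f}" == "43.091"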
Nov 1 01:28:58.449440 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 01:28:58.449824 systemd-logind[1596]: Session 7 logged out. Waiting for processes to exit. Nov 1 01:28:58.450350 systemd[1]: Started sshd@5-139.178.94.15:22-147.75.109.163:50074.service. Nov 1 01:28:58.450855 systemd-logind[1596]: Removed session 7. Nov 1 01:28:58.486258 sshd[1722]: Accepted publickey for core from 147.75.109.163 port 50074 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:28:58.487098 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:28:58.490074 systemd-logind[1596]: New session 8 of user core. Nov 1 01:28:58.490646 systemd[1]: Started session-8.scope. Nov 1 01:28:58.553864 sshd[1722]: pam_unix(sshd:session): session closed for user core Nov 1 01:28:58.561373 systemd[1]: sshd@5-139.178.94.15:22-147.75.109.163:50074.service: Deactivated successfully. Nov 1 01:28:58.563143 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 01:28:58.565000 systemd-logind[1596]: Session 8 logged out. Waiting for processes to exit. Nov 1 01:28:58.568039 systemd[1]: Started sshd@6-139.178.94.15:22-147.75.109.163:50090.service. Nov 1 01:28:58.570651 systemd-logind[1596]: Removed session 8. Nov 1 01:28:58.673709 sshd[1728]: Accepted publickey for core from 147.75.109.163 port 50090 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:28:58.674883 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:28:58.678700 systemd-logind[1596]: New session 9 of user core. Nov 1 01:28:58.679574 systemd[1]: Started session-9.scope. Nov 1 01:28:58.773829 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 01:28:58.774570 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 01:28:58.798854 dbus-daemon[1523]: Ѝ\xe7\xc8\xc1U: received setenforce notice (enforcing=752636112) Nov 1 01:28:58.803925 sudo[1731]: pam_unix(sudo:session): session closed for user root Nov 1 01:28:58.809163 sshd[1728]: pam_unix(sshd:session): session closed for user core Nov 1 01:28:58.816850 systemd[1]: sshd@6-139.178.94.15:22-147.75.109.163:50090.service: Deactivated successfully. Nov 1 01:28:58.818653 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 01:28:58.820686 systemd-logind[1596]: Session 9 logged out. Waiting for processes to exit. Nov 1 01:28:58.823837 systemd[1]: Started sshd@7-139.178.94.15:22-147.75.109.163:50094.service. Nov 1 01:28:58.826750 systemd-logind[1596]: Removed session 9. Nov 1 01:28:58.929560 sshd[1735]: Accepted publickey for core from 147.75.109.163 port 50094 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:28:58.930611 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:28:58.934088 systemd-logind[1596]: New session 10 of user core. Nov 1 01:28:58.934788 systemd[1]: Started session-10.scope. 
Nov 1 01:28:58.996875 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 01:28:58.997589 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 01:28:59.005215 sudo[1739]: pam_unix(sudo:session): session closed for user root Nov 1 01:28:59.018517 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 01:28:59.019188 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 01:28:59.041847 systemd[1]: Stopping audit-rules.service... Nov 1 01:28:59.042000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 01:28:59.042663 auditctl[1742]: No rules Nov 1 01:28:59.042830 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 01:28:59.042909 systemd[1]: Stopped audit-rules.service. Nov 1 01:28:59.043688 systemd[1]: Starting audit-rules.service... Nov 1 01:28:59.048149 kernel: kauditd_printk_skb: 145 callbacks suppressed Nov 1 01:28:59.048191 kernel: audit: type=1305 audit(1761960539.042:198): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 01:28:59.053307 augenrules[1759]: No rules Nov 1 01:28:59.053603 systemd[1]: Finished audit-rules.service. Nov 1 01:28:59.054147 sudo[1738]: pam_unix(sudo:session): session closed for user root Nov 1 01:28:59.055198 sshd[1735]: pam_unix(sshd:session): session closed for user core Nov 1 01:28:59.057037 systemd[1]: sshd@7-139.178.94.15:22-147.75.109.163:50094.service: Deactivated successfully. Nov 1 01:28:59.057347 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 01:28:59.057748 systemd-logind[1596]: Session 10 logged out. Waiting for processes to exit. Nov 1 01:28:59.058350 systemd[1]: Started sshd@8-139.178.94.15:22-147.75.109.163:50102.service. Nov 1 01:28:59.058796 systemd-logind[1596]: Removed session 10. Nov 1 01:28:59.042000 audit[1742]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff5f74e160 a2=420 a3=0 items=0 ppid=1 pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.094889 kernel: audit: type=1300 audit(1761960539.042:198): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff5f74e160 a2=420 a3=0 items=0 ppid=1 pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.094961 kernel: audit: type=1327 audit(1761960539.042:198): proctitle=2F7362696E2F617564697463746C002D44 Nov 1 01:28:59.042000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Nov 1 01:28:59.104470 kernel: audit: type=1131 audit(1761960539.042:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:59.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:28:59.126948 kernel: audit: type=1130 audit(1761960539.053:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:59.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:59.149477 kernel: audit: type=1106 audit(1761960539.053:201): pid=1738 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:28:59.053000 audit[1738]: USER_END pid=1738 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:28:59.155951 sshd[1765]: Accepted publickey for core from 147.75.109.163 port 50102 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:28:59.157865 sshd[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:28:59.160214 systemd-logind[1596]: New session 11 of user core. Nov 1 01:28:59.160772 systemd[1]: Started session-11.scope. Nov 1 01:28:59.175585 kernel: audit: type=1104 audit(1761960539.053:202): pid=1738 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:28:59.053000 audit[1738]: CRED_DISP pid=1738 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:28:59.199278 kernel: audit: type=1106 audit(1761960539.055:203): pid=1735 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:28:59.055000 audit[1735]: USER_END pid=1735 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:28:59.207655 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 01:28:59.207781 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 01:28:59.219755 systemd[1]: Starting docker.service... 
Nov 1 01:28:59.231661 kernel: audit: type=1104 audit(1761960539.055:204): pid=1735 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:28:59.055000 audit[1735]: CRED_DISP pid=1735 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:28:59.235613 env[1783]: time="2025-11-01T01:28:59.235561114Z" level=info msg="Starting up" Nov 1 01:28:59.236220 env[1783]: time="2025-11-01T01:28:59.236209687Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 01:28:59.236220 env[1783]: time="2025-11-01T01:28:59.236219113Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 01:28:59.236270 env[1783]: time="2025-11-01T01:28:59.236230114Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 01:28:59.236270 env[1783]: time="2025-11-01T01:28:59.236236549Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 01:28:59.237463 env[1783]: time="2025-11-01T01:28:59.237453082Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 01:28:59.237463 env[1783]: time="2025-11-01T01:28:59.237462270Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 01:28:59.237523 env[1783]: time="2025-11-01T01:28:59.237470149Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 01:28:59.237523 env[1783]: time="2025-11-01T01:28:59.237475370Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 01:28:59.257817 kernel: audit: type=1131 audit(1761960539.056:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.94.15:22-147.75.109.163:50094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:59.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.94.15:22-147.75.109.163:50094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:59.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.94.15:22-147.75.109.163:50102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:28:59.154000 audit[1765]: USER_ACCT pid=1765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:28:59.156000 audit[1765]: CRED_ACQ pid=1765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:28:59.156000 audit[1765]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd66fc3cb0 a2=3 a3=0 items=0 ppid=1 pid=1765 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.156000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:28:59.161000 audit[1765]: USER_START pid=1765 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:28:59.162000 audit[1767]: CRED_ACQ pid=1767 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:28:59.206000 audit[1768]: USER_ACCT pid=1768 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:28:59.206000 audit[1768]: CRED_REFR pid=1768 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:28:59.207000 audit[1768]: USER_START pid=1768 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:28:59.290138 env[1783]: time="2025-11-01T01:28:59.290099802Z" level=info msg="Loading containers: start." 
Nov 1 01:28:59.326000 audit[1830]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.326000 audit[1830]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffeffce5c80 a2=0 a3=7ffeffce5c6c items=0 ppid=1783 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.326000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Nov 1 01:28:59.327000 audit[1832]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.327000 audit[1832]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc65b46630 a2=0 a3=7ffc65b4661c items=0 ppid=1783 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.327000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Nov 1 01:28:59.329000 audit[1834]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.329000 audit[1834]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc590d5770 a2=0 a3=7ffc590d575c items=0 ppid=1783 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.329000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Nov 1 01:28:59.330000 audit[1836]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.330000 audit[1836]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff0081d780 a2=0 a3=7fff0081d76c items=0 ppid=1783 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.330000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 01:28:59.337000 audit[1838]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.337000 audit[1838]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc2d8d4230 a2=0 a3=7ffc2d8d421c items=0 ppid=1783 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.337000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Nov 1 01:28:59.366000 audit[1843]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 
01:28:59.366000 audit[1843]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffeb2b87a0 a2=0 a3=7fffeb2b878c items=0 ppid=1783 pid=1843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.366000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Nov 1 01:28:59.370000 audit[1845]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1845 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.370000 audit[1845]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeba6ae800 a2=0 a3=7ffeba6ae7ec items=0 ppid=1783 pid=1845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.370000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Nov 1 01:28:59.372000 audit[1847]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1847 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.372000 audit[1847]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd5bc96000 a2=0 a3=7ffd5bc95fec items=0 ppid=1783 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.372000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Nov 1 01:28:59.375000 audit[1849]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1849 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.375000 audit[1849]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd00c15140 a2=0 a3=7ffd00c1512c items=0 ppid=1783 pid=1849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.375000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 01:28:59.382000 audit[1853]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.382000 audit[1853]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe80074680 a2=0 a3=7ffe8007466c items=0 ppid=1783 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.382000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 01:28:59.388000 audit[1854]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.388000 audit[1854]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe9fdd9d70 a2=0 a3=7ffe9fdd9d5c items=0 ppid=1783 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.388000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 01:28:59.420472 kernel: Initializing XFRM netlink socket Nov 1 01:28:59.482600 env[1783]: time="2025-11-01T01:28:59.482577842Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 01:28:59.495000 audit[1862]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.495000 audit[1862]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe29fdfc70 a2=0 a3=7ffe29fdfc5c items=0 ppid=1783 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.495000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Nov 1 01:28:59.516000 audit[1865]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.516000 audit[1865]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd505e8710 a2=0 a3=7ffd505e86fc items=0 ppid=1783 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.516000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Nov 1 01:28:59.518000 audit[1868]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1868 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.518000 audit[1868]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe2a087160 a2=0 a3=7ffe2a08714c items=0 ppid=1783 pid=1868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.518000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Nov 1 01:28:59.520000 audit[1870]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1870 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.520000 audit[1870]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdb2d935a0 a2=0 a3=7ffdb2d9358c items=0 ppid=1783 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.520000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Nov 1 01:28:59.522000 audit[1872]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1872 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.522000 audit[1872]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc1f440920 a2=0 a3=7ffc1f44090c items=0 ppid=1783 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.522000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Nov 1 01:28:59.524000 audit[1874]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.524000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffdd2b39d80 a2=0 a3=7ffdd2b39d6c items=0 ppid=1783 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.524000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Nov 1 01:28:59.526000 audit[1876]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.526000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffcf1b2a960 a2=0 a3=7ffcf1b2a94c items=0 ppid=1783 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.526000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Nov 1 01:28:59.539000 audit[1879]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.539000 audit[1879]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc859db090 a2=0 a3=7ffc859db07c items=0 ppid=1783 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.539000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Nov 1 01:28:59.542000 audit[1881]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.542000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffdb3ad0030 a2=0 a3=7ffdb3ad001c items=0 ppid=1783 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.542000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Nov 1 01:28:59.545000 
audit[1883]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.545000 audit[1883]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffeba6f3f60 a2=0 a3=7ffeba6f3f4c items=0 ppid=1783 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.545000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 01:28:59.549000 audit[1885]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.549000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffcd52e9d0 a2=0 a3=7fffcd52e9bc items=0 ppid=1783 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.549000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Nov 1 01:28:59.551355 systemd-networkd[1315]: docker0: Link UP Nov 1 01:28:59.565000 audit[1889]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.565000 audit[1889]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd79435f90 a2=0 a3=7ffd79435f7c items=0 ppid=1783 pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.565000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 01:28:59.575000 audit[1890]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:28:59.575000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd4c240040 a2=0 a3=7ffd4c24002c items=0 ppid=1783 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:28:59.575000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 01:28:59.578142 env[1783]: time="2025-11-01T01:28:59.578042449Z" level=info msg="Loading containers: done." 
Nov 1 01:28:59.600455 env[1783]: time="2025-11-01T01:28:59.600331676Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 01:28:59.600791 env[1783]: time="2025-11-01T01:28:59.600754573Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 01:28:59.601080 env[1783]: time="2025-11-01T01:28:59.600988231Z" level=info msg="Daemon has completed initialization" Nov 1 01:28:59.608200 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4148272411-merged.mount: Deactivated successfully. Nov 1 01:28:59.627133 systemd[1]: Started docker.service. Nov 1 01:28:59.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:28:59.636981 env[1783]: time="2025-11-01T01:28:59.636931799Z" level=info msg="API listen on /run/docker.sock" Nov 1 01:29:00.418585 env[1561]: time="2025-11-01T01:29:00.418449828Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 1 01:29:00.957419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1059490431.mount: Deactivated successfully. Nov 1 01:29:02.037096 env[1561]: time="2025-11-01T01:29:02.037040733Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:02.037806 env[1561]: time="2025-11-01T01:29:02.037756178Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:02.038906 env[1561]: time="2025-11-01T01:29:02.038871796Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:02.039895 env[1561]: time="2025-11-01T01:29:02.039859195Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:02.040395 env[1561]: time="2025-11-01T01:29:02.040344342Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 1 01:29:02.040935 env[1561]: time="2025-11-01T01:29:02.040875776Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 1 01:29:02.693770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 01:29:02.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:02.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:02.693914 systemd[1]: Stopped kubelet.service. Nov 1 01:29:02.694859 systemd[1]: Starting kubelet.service... Nov 1 01:29:02.964685 systemd[1]: Started kubelet.service. 
Nov 1 01:29:02.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:02.986977 kubelet[1945]: E1101 01:29:02.986899 1945 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:29:02.988017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:29:02.988088 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:29:02.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 01:29:03.399886 env[1561]: time="2025-11-01T01:29:03.399830534Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:03.400571 env[1561]: time="2025-11-01T01:29:03.400533328Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:03.401594 env[1561]: time="2025-11-01T01:29:03.401580056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:03.402521 env[1561]: time="2025-11-01T01:29:03.402507813Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:03.402938 env[1561]: time="2025-11-01T01:29:03.402921829Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 1 01:29:03.403326 env[1561]: time="2025-11-01T01:29:03.403286389Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 1 01:29:04.438651 env[1561]: time="2025-11-01T01:29:04.438604383Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:04.439203 env[1561]: time="2025-11-01T01:29:04.439170344Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:04.440883 env[1561]: time="2025-11-01T01:29:04.440837813Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:04.441882 env[1561]: time="2025-11-01T01:29:04.441836002Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:04.442792 env[1561]: time="2025-11-01T01:29:04.442748965Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 1 01:29:04.443144 env[1561]: time="2025-11-01T01:29:04.443132427Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 1 01:29:05.491972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3442769234.mount: Deactivated successfully. Nov 1 01:29:05.826899 env[1561]: time="2025-11-01T01:29:05.826816329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:05.827448 env[1561]: time="2025-11-01T01:29:05.827391983Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:05.827957 env[1561]: time="2025-11-01T01:29:05.827921793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:05.828591 env[1561]: time="2025-11-01T01:29:05.828551853Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:05.828832 env[1561]: time="2025-11-01T01:29:05.828791321Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 1 01:29:05.829173 env[1561]: time="2025-11-01T01:29:05.829159462Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 1 01:29:06.328841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3951328490.mount: Deactivated successfully. Nov 1 01:29:06.714215 systemd[1]: Started sshd@9-139.178.94.15:22-78.128.112.74:41256.service. Nov 1 01:29:06.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.94.15:22-78.128.112.74:41256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:06.719874 kernel: kauditd_printk_skb: 88 callbacks suppressed Nov 1 01:29:06.719950 kernel: audit: type=1130 audit(1761960546.713:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.94.15:22-78.128.112.74:41256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:29:07.336866 env[1561]: time="2025-11-01T01:29:07.336839077Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:07.337602 env[1561]: time="2025-11-01T01:29:07.337588922Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:07.338938 env[1561]: time="2025-11-01T01:29:07.338926312Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:07.340034 env[1561]: time="2025-11-01T01:29:07.340021725Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:07.340533 env[1561]: time="2025-11-01T01:29:07.340485879Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 1 01:29:07.340907 env[1561]: time="2025-11-01T01:29:07.340853867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 1 01:29:07.395310 sshd[1961]: Invalid user user from 78.128.112.74 port 41256 Nov 1 01:29:07.567296 sshd[1961]: pam_faillock(sshd:auth): User unknown Nov 1 01:29:07.568516 sshd[1961]: pam_unix(sshd:auth): check pass; user unknown Nov 1 01:29:07.568612 sshd[1961]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=78.128.112.74 Nov 1 01:29:07.569682 sshd[1961]: pam_faillock(sshd:auth): User unknown Nov 1 01:29:07.569000 audit[1961]: USER_AUTH pid=1961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="user" exe="/usr/sbin/sshd" hostname=78.128.112.74 addr=78.128.112.74 terminal=ssh res=failed' Nov 1 01:29:07.646458 kernel: audit: type=1100 audit(1761960547.569:245): pid=1961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="user" exe="/usr/sbin/sshd" hostname=78.128.112.74 addr=78.128.112.74 terminal=ssh res=failed' Nov 1 01:29:07.895277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696249495.mount: Deactivated successfully. 
Nov 1 01:29:07.896505 env[1561]: time="2025-11-01T01:29:07.896485503Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:07.897118 env[1561]: time="2025-11-01T01:29:07.897074435Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:07.897725 env[1561]: time="2025-11-01T01:29:07.897686824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:07.898362 env[1561]: time="2025-11-01T01:29:07.898321255Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:07.898699 env[1561]: time="2025-11-01T01:29:07.898658677Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 1 01:29:07.899174 env[1561]: time="2025-11-01T01:29:07.899157152Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 1 01:29:09.850147 sshd[1961]: Failed password for invalid user user from 78.128.112.74 port 41256 ssh2 Nov 1 01:29:10.583789 env[1561]: time="2025-11-01T01:29:10.583733353Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.6.4-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:10.584393 env[1561]: time="2025-11-01T01:29:10.584346577Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:10.585739 env[1561]: time="2025-11-01T01:29:10.585697253Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.6.4-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:10.586762 env[1561]: time="2025-11-01T01:29:10.586717619Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:10.587785 env[1561]: time="2025-11-01T01:29:10.587747895Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 1 01:29:11.237330 sshd[1961]: Connection closed by invalid user user 78.128.112.74 port 41256 [preauth] Nov 1 01:29:11.237933 systemd[1]: sshd@9-139.178.94.15:22-78.128.112.74:41256.service: Deactivated successfully. Nov 1 01:29:11.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.94.15:22-78.128.112.74:41256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:11.317449 kernel: audit: type=1131 audit(1761960551.236:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.94.15:22-78.128.112.74:41256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 01:29:12.700345 systemd[1]: Stopped kubelet.service. Nov 1 01:29:12.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:12.701688 systemd[1]: Starting kubelet.service... Nov 1 01:29:12.716432 systemd[1]: Reloading. Nov 1 01:29:12.748978 /usr/lib/systemd/system-generators/torcx-generator[2032]: time="2025-11-01T01:29:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:29:12.748993 /usr/lib/systemd/system-generators/torcx-generator[2032]: time="2025-11-01T01:29:12Z" level=info msg="torcx already run" Nov 1 01:29:12.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:12.760480 kernel: audit: type=1130 audit(1761960552.699:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:12.760513 kernel: audit: type=1131 audit(1761960552.699:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:12.841628 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:29:12.841636 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:29:12.852808 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 01:29:12.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:12.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.023942 kernel: audit: type=1400 audit(1761960552.899:249): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.024007 kernel: audit: type=1400 audit(1761960552.899:250): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.024051 kernel: audit: type=1400 audit(1761960552.899:251): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:12.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.087018 kernel: audit: type=1400 audit(1761960552.899:252): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:12.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.151121 kernel: audit: type=1400 audit(1761960552.899:253): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:12.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.215681 kernel: audit: type=1400 audit(1761960552.899:254): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:12.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.279970 kernel: audit: type=1400 audit(1761960552.899:255): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:12.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.344834 kernel: audit: type=1400 audit(1761960552.899:256): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:12.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:12.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.085000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.085000 audit: BPF prog-id=46 op=LOAD Nov 1 01:29:13.085000 audit: BPF prog-id=40 op=UNLOAD Nov 1 01:29:13.086000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.086000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.086000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.086000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.086000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.086000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.086000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.086000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.278000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.278000 audit: BPF prog-id=47 op=LOAD Nov 1 01:29:13.278000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.278000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.278000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.278000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.278000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
01:29:13.278000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.278000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.278000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.408000 audit: BPF prog-id=48 op=LOAD Nov 1 01:29:13.408000 audit: BPF prog-id=30 op=UNLOAD Nov 1 01:29:13.408000 audit: BPF prog-id=31 op=UNLOAD Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit: BPF prog-id=49 op=LOAD Nov 1 01:29:13.409000 audit: BPF prog-id=32 op=UNLOAD Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit: BPF prog-id=50 op=LOAD Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.409000 audit: BPF prog-id=51 op=LOAD Nov 1 01:29:13.409000 audit: BPF prog-id=33 op=UNLOAD Nov 1 01:29:13.409000 audit: BPF prog-id=34 op=UNLOAD Nov 1 
01:29:13.410000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit: BPF prog-id=52 op=LOAD Nov 1 01:29:13.410000 audit: BPF prog-id=42 op=UNLOAD Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit: BPF prog-id=53 op=LOAD Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.410000 audit: BPF prog-id=54 op=LOAD Nov 1 01:29:13.410000 audit: BPF prog-id=43 op=UNLOAD Nov 1 01:29:13.410000 audit: BPF prog-id=44 op=UNLOAD Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit: BPF prog-id=55 op=LOAD Nov 1 01:29:13.411000 audit: BPF prog-id=39 op=UNLOAD Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit: BPF prog-id=56 op=LOAD Nov 1 01:29:13.411000 audit: BPF prog-id=41 op=UNLOAD Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.411000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit: BPF prog-id=57 op=LOAD Nov 1 01:29:13.412000 audit: BPF prog-id=35 op=UNLOAD Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit: BPF prog-id=58 op=LOAD Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.412000 audit: BPF prog-id=59 op=LOAD Nov 1 01:29:13.412000 audit: BPF prog-id=36 op=UNLOAD Nov 1 01:29:13.412000 audit: BPF prog-id=37 op=UNLOAD Nov 1 01:29:13.413000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.413000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.413000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.413000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.413000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.413000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.413000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.413000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.413000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.413000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:13.413000 audit: BPF prog-id=60 op=LOAD Nov 1 01:29:13.413000 audit: BPF prog-id=38 op=UNLOAD Nov 1 01:29:13.420712 systemd[1]: Started kubelet.service. Nov 1 01:29:13.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:13.421367 systemd[1]: Stopping kubelet.service... Nov 1 01:29:13.421549 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 01:29:13.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:13.421637 systemd[1]: Stopped kubelet.service. Nov 1 01:29:13.422351 systemd[1]: Starting kubelet.service... Nov 1 01:29:13.655731 systemd[1]: Started kubelet.service. Nov 1 01:29:13.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:13.681864 kubelet[2100]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 01:29:13.681864 kubelet[2100]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:29:13.682147 kubelet[2100]: I1101 01:29:13.681889 2100 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 01:29:13.980889 kubelet[2100]: I1101 01:29:13.980849 2100 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 01:29:13.980889 kubelet[2100]: I1101 01:29:13.980860 2100 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 01:29:13.983030 kubelet[2100]: I1101 01:29:13.983001 2100 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 01:29:13.983030 kubelet[2100]: I1101 01:29:13.983024 2100 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 01:29:13.983167 kubelet[2100]: I1101 01:29:13.983137 2100 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 01:29:13.986866 kubelet[2100]: I1101 01:29:13.986855 2100 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:29:13.987097 kubelet[2100]: E1101 01:29:13.987066 2100 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.94.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.94.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 01:29:13.988309 kubelet[2100]: E1101 01:29:13.988296 2100 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:29:13.988345 kubelet[2100]: I1101 01:29:13.988321 2100 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 01:29:14.008787 kubelet[2100]: I1101 01:29:14.008779 2100 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 1 01:29:14.008921 kubelet[2100]: I1101 01:29:14.008908 2100 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:29:14.009001 kubelet[2100]: I1101 01:29:14.008921 2100 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-34cd8b9336","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 01:29:14.009065 kubelet[2100]: I1101 01:29:14.009005 2100 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 01:29:14.009065 kubelet[2100]: I1101 01:29:14.009011 2100 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 01:29:14.009065 kubelet[2100]: I1101 01:29:14.009055 2100 container_manager_linux.go:315] "Creating Dynamic Resource 
Allocation (DRA) manager" Nov 1 01:29:14.009973 kubelet[2100]: I1101 01:29:14.009967 2100 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:29:14.011725 kubelet[2100]: I1101 01:29:14.011674 2100 kubelet.go:475] "Attempting to sync node with API server" Nov 1 01:29:14.011725 kubelet[2100]: I1101 01:29:14.011683 2100 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:29:14.011725 kubelet[2100]: I1101 01:29:14.011694 2100 kubelet.go:387] "Adding apiserver pod source" Nov 1 01:29:14.011725 kubelet[2100]: I1101 01:29:14.011721 2100 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:29:14.013436 kubelet[2100]: E1101 01:29:14.013416 2100 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.94.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 01:29:14.013478 kubelet[2100]: E1101 01:29:14.013454 2100 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.94.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-34cd8b9336&limit=500&resourceVersion=0\": dial tcp 139.178.94.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 01:29:14.013478 kubelet[2100]: I1101 01:29:14.013468 2100 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 01:29:14.013978 kubelet[2100]: I1101 01:29:14.013967 2100 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 01:29:14.014009 kubelet[2100]: I1101 01:29:14.013988 2100 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 01:29:14.014038 kubelet[2100]: W1101 01:29:14.014012 2100 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 1 01:29:14.015024 kubelet[2100]: I1101 01:29:14.015018 2100 server.go:1262] "Started kubelet" Nov 1 01:29:14.015089 kubelet[2100]: I1101 01:29:14.015060 2100 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:29:14.015131 kubelet[2100]: I1101 01:29:14.015107 2100 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 01:29:14.015162 kubelet[2100]: I1101 01:29:14.015142 2100 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 01:29:14.015329 kubelet[2100]: I1101 01:29:14.015291 2100 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 01:29:14.015515 kubelet[2100]: I1101 01:29:14.015501 2100 kubelet.go:1567] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 01:29:14.015000 audit[2100]: AVC avc: denied { mac_admin } for pid=2100 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:14.015000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:29:14.015000 audit[2100]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f16480 a1=c0006913e0 a2=c000f16450 a3=25 items=0 ppid=1 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.015000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:29:14.015000 audit[2100]: AVC avc: denied { mac_admin } for pid=2100 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:14.015000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:29:14.015000 audit[2100]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000110b20 a1=c0006913f8 a2=c000f16510 a3=25 items=0 ppid=1 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.015000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:29:14.015883 kubelet[2100]: I1101 01:29:14.015525 2100 kubelet.go:1571] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 01:29:14.015883 kubelet[2100]: I1101 01:29:14.015557 2100 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:29:14.015883 kubelet[2100]: I1101 01:29:14.015604 2100 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 01:29:14.015883 kubelet[2100]: E1101 01:29:14.015631 2100 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-34cd8b9336\" not found" Nov 1 01:29:14.015883 kubelet[2100]: I1101 01:29:14.015654 2100 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 01:29:14.015883 kubelet[2100]: I1101 01:29:14.015703 2100 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 01:29:14.015883 kubelet[2100]: I1101 01:29:14.015767 2100 reconciler.go:29] "Reconciler: start to sync state" Nov 1 01:29:14.015883 kubelet[2100]: E1101 01:29:14.015800 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-34cd8b9336?timeout=10s\": dial tcp 139.178.94.15:6443: connect: connection refused" interval="200ms" Nov 1 01:29:14.015883 kubelet[2100]: E1101 01:29:14.015862 2100 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.94.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.94.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 01:29:14.015883 kubelet[2100]: I1101 01:29:14.015876 2100 factory.go:223] Registration of the systemd container factory successfully Nov 1 01:29:14.016152 kubelet[2100]: I1101 01:29:14.015931 2100 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:29:14.016152 kubelet[2100]: E1101 01:29:14.016117 2100 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:29:14.016389 kubelet[2100]: I1101 01:29:14.016380 2100 factory.go:223] Registration of the containerd container factory successfully Nov 1 01:29:14.017000 audit[2129]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:14.017000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcf3a22630 a2=0 a3=7ffcf3a2261c items=0 ppid=2100 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.017000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 01:29:14.018000 audit[2130]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:14.018000 audit[2130]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedd89d230 a2=0 a3=7ffedd89d21c items=0 ppid=2100 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 01:29:14.025000 audit[2133]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:14.025000 audit[2133]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe3873eaa0 a2=0 a3=7ffe3873ea8c items=0 ppid=2100 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.025000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 01:29:14.026415 kubelet[2100]: I1101 01:29:14.026301 2100 server.go:310] "Adding debug handlers to kubelet server" Nov 1 01:29:14.026000 audit[2135]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:14.026000 audit[2135]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe49787f30 a2=0 a3=7ffe49787f1c items=0 ppid=2100 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.026000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 01:29:14.029743 kubelet[2100]: E1101 01:29:14.028715 2100 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.94.15:6443/api/v1/namespaces/default/events\": dial tcp 139.178.94.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-34cd8b9336.1873bdc9e250988c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-34cd8b9336,UID:ci-3510.3.8-n-34cd8b9336,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-34cd8b9336,},FirstTimestamp:2025-11-01 01:29:14.015004812 +0000 UTC m=+0.355066332,LastTimestamp:2025-11-01 01:29:14.015004812 +0000 UTC m=+0.355066332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-34cd8b9336,}" Nov 1 01:29:14.030000 audit[2138]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:14.030000 audit[2138]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcb4994720 a2=0 a3=7ffcb499470c items=0 ppid=2100 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.030000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Nov 1 01:29:14.031481 kubelet[2100]: I1101 01:29:14.031432 2100 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 01:29:14.031000 audit[2139]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2139 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:14.031000 audit[2139]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffe17a8f90 a2=0 a3=7fffe17a8f7c items=0 ppid=2100 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 01:29:14.031967 kubelet[2100]: I1101 01:29:14.031937 2100 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 01:29:14.031967 kubelet[2100]: I1101 01:29:14.031949 2100 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 01:29:14.031967 kubelet[2100]: I1101 01:29:14.031962 2100 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 01:29:14.031000 audit[2141]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:14.031000 audit[2141]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe75f14a0 a2=0 a3=7fffe75f148c items=0 ppid=2100 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.031000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 01:29:14.032099 kubelet[2100]: E1101 01:29:14.031981 2100 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:29:14.032173 kubelet[2100]: E1101 01:29:14.032159 2100 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.94.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 01:29:14.032000 audit[2142]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2142 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:14.032000 audit[2142]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2bd02ff0 a2=0 a3=7ffe2bd02fdc items=0 ppid=2100 pid=2142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.032000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 01:29:14.032000 audit[2143]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:14.032000 audit[2143]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe91cbdc70 a2=0 a3=7ffe91cbdc5c items=0 ppid=2100 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.032000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 01:29:14.032000 audit[2144]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=2144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:14.032000 audit[2144]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fffa71291e0 a2=0 a3=7fffa71291cc items=0 ppid=2100 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.032000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 01:29:14.032000 
audit[2145]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=2145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:14.032000 audit[2145]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe907b8720 a2=0 a3=7ffe907b870c items=0 ppid=2100 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.032000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 01:29:14.032000 audit[2146]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:14.032000 audit[2146]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff425d3640 a2=0 a3=7fff425d362c items=0 ppid=2100 pid=2146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.032000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 01:29:14.115911 kubelet[2100]: E1101 01:29:14.115790 2100 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-34cd8b9336\" not found" Nov 1 01:29:14.132265 kubelet[2100]: E1101 01:29:14.132145 2100 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 01:29:14.187650 kubelet[2100]: I1101 01:29:14.187593 2100 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:29:14.187650 kubelet[2100]: I1101 01:29:14.187632 2100 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:29:14.188293 kubelet[2100]: I1101 01:29:14.187671 2100 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:29:14.189148 kubelet[2100]: I1101 01:29:14.189117 2100 policy_none.go:49] "None policy: Start" Nov 1 01:29:14.189251 kubelet[2100]: I1101 01:29:14.189156 2100 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 01:29:14.189251 kubelet[2100]: I1101 01:29:14.189185 2100 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 01:29:14.190289 kubelet[2100]: I1101 01:29:14.190253 2100 policy_none.go:47] "Start" Nov 1 01:29:14.198537 systemd[1]: Created slice kubepods.slice. Nov 1 01:29:14.209019 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 01:29:14.215955 kubelet[2100]: E1101 01:29:14.215913 2100 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-34cd8b9336\" not found" Nov 1 01:29:14.215953 systemd[1]: Created slice kubepods-besteffort.slice. 
Nov 1 01:29:14.216099 kubelet[2100]: E1101 01:29:14.216072 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-34cd8b9336?timeout=10s\": dial tcp 139.178.94.15:6443: connect: connection refused" interval="400ms" Nov 1 01:29:14.224024 kubelet[2100]: E1101 01:29:14.223986 2100 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 01:29:14.224024 kubelet[2100]: E1101 01:29:14.224014 2100 server.go:96] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" path="/var/lib/kubelet/device-plugins/" Nov 1 01:29:14.223000 audit[2100]: AVC avc: denied { mac_admin } for pid=2100 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:14.223000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:29:14.223000 audit[2100]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f170e0 a1=c000405380 a2=c000f170b0 a3=25 items=0 ppid=1 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:14.223000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:29:14.224218 kubelet[2100]: I1101 01:29:14.224072 2100 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:29:14.224218 kubelet[2100]: I1101 01:29:14.224081 2100 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:29:14.224218 kubelet[2100]: I1101 01:29:14.224191 2100 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:29:14.224439 kubelet[2100]: E1101 01:29:14.224427 2100 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 01:29:14.224481 kubelet[2100]: E1101 01:29:14.224446 2100 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-34cd8b9336\" not found" Nov 1 01:29:14.327811 kubelet[2100]: I1101 01:29:14.327606 2100 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.328450 kubelet[2100]: E1101 01:29:14.328338 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.15:6443/api/v1/nodes\": dial tcp 139.178.94.15:6443: connect: connection refused" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.338658 systemd[1]: Created slice kubepods-burstable-pod733ca44bab5ecc27704211903449df05.slice. 
Nov 1 01:29:14.363752 kubelet[2100]: E1101 01:29:14.363726 2100 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-34cd8b9336\" not found" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.366321 systemd[1]: Created slice kubepods-burstable-podc089f6f004a1077f804153ebb6f7d803.slice. Nov 1 01:29:14.367490 kubelet[2100]: E1101 01:29:14.367475 2100 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-34cd8b9336\" not found" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.369172 systemd[1]: Created slice kubepods-burstable-podfb0002f3582128fc4d54a121b245e19e.slice. Nov 1 01:29:14.370468 kubelet[2100]: E1101 01:29:14.370421 2100 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-34cd8b9336\" not found" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.416430 kubelet[2100]: I1101 01:29:14.416300 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c089f6f004a1077f804153ebb6f7d803-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" (UID: \"c089f6f004a1077f804153ebb6f7d803\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.416430 kubelet[2100]: I1101 01:29:14.416381 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c089f6f004a1077f804153ebb6f7d803-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" (UID: \"c089f6f004a1077f804153ebb6f7d803\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.416827 kubelet[2100]: I1101 01:29:14.416460 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c089f6f004a1077f804153ebb6f7d803-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" (UID: \"c089f6f004a1077f804153ebb6f7d803\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.416827 kubelet[2100]: I1101 01:29:14.416505 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c089f6f004a1077f804153ebb6f7d803-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" (UID: \"c089f6f004a1077f804153ebb6f7d803\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.416827 kubelet[2100]: I1101 01:29:14.416609 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb0002f3582128fc4d54a121b245e19e-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-34cd8b9336\" (UID: \"fb0002f3582128fc4d54a121b245e19e\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.416827 kubelet[2100]: I1101 01:29:14.416684 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c089f6f004a1077f804153ebb6f7d803-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" (UID: \"c089f6f004a1077f804153ebb6f7d803\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 
01:29:14.416827 kubelet[2100]: I1101 01:29:14.416735 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/733ca44bab5ecc27704211903449df05-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-34cd8b9336\" (UID: \"733ca44bab5ecc27704211903449df05\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.417271 kubelet[2100]: I1101 01:29:14.416798 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/733ca44bab5ecc27704211903449df05-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-34cd8b9336\" (UID: \"733ca44bab5ecc27704211903449df05\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.417271 kubelet[2100]: I1101 01:29:14.416839 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/733ca44bab5ecc27704211903449df05-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-34cd8b9336\" (UID: \"733ca44bab5ecc27704211903449df05\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.532831 kubelet[2100]: I1101 01:29:14.532740 2100 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.533591 kubelet[2100]: E1101 01:29:14.533498 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.15:6443/api/v1/nodes\": dial tcp 139.178.94.15:6443: connect: connection refused" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.618236 kubelet[2100]: E1101 01:29:14.618001 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-34cd8b9336?timeout=10s\": dial tcp 139.178.94.15:6443: connect: connection refused" interval="800ms" Nov 1 01:29:14.679018 env[1561]: time="2025-11-01T01:29:14.678917003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-34cd8b9336,Uid:733ca44bab5ecc27704211903449df05,Namespace:kube-system,Attempt:0,}" Nov 1 01:29:14.679747 env[1561]: time="2025-11-01T01:29:14.679094108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-34cd8b9336,Uid:c089f6f004a1077f804153ebb6f7d803,Namespace:kube-system,Attempt:0,}" Nov 1 01:29:14.680236 env[1561]: time="2025-11-01T01:29:14.680161356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-34cd8b9336,Uid:fb0002f3582128fc4d54a121b245e19e,Namespace:kube-system,Attempt:0,}" Nov 1 01:29:14.923696 kubelet[2100]: E1101 01:29:14.923463 2100 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.94.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.94.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 01:29:14.937599 kubelet[2100]: I1101 01:29:14.937546 2100 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:14.938355 kubelet[2100]: E1101 01:29:14.938259 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.15:6443/api/v1/nodes\": dial tcp 139.178.94.15:6443: connect: connection refused" 
node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:15.154121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300832981.mount: Deactivated successfully. Nov 1 01:29:15.155738 env[1561]: time="2025-11-01T01:29:15.155720627Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:15.156704 env[1561]: time="2025-11-01T01:29:15.156662642Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:15.157073 env[1561]: time="2025-11-01T01:29:15.157041181Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:15.157600 env[1561]: time="2025-11-01T01:29:15.157561156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:15.158194 env[1561]: time="2025-11-01T01:29:15.158153514Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:15.158931 env[1561]: time="2025-11-01T01:29:15.158892572Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:15.159732 env[1561]: time="2025-11-01T01:29:15.159691151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:15.161302 env[1561]: time="2025-11-01T01:29:15.161261761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:15.162581 env[1561]: time="2025-11-01T01:29:15.162541830Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:15.163288 env[1561]: time="2025-11-01T01:29:15.163277858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:15.163939 env[1561]: time="2025-11-01T01:29:15.163916193Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:15.164840 env[1561]: time="2025-11-01T01:29:15.164804600Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:15.169757 env[1561]: time="2025-11-01T01:29:15.169696515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:29:15.169757 env[1561]: time="2025-11-01T01:29:15.169726815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:29:15.169757 env[1561]: time="2025-11-01T01:29:15.169738028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:29:15.169867 env[1561]: time="2025-11-01T01:29:15.169821394Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20326f1c938c5aba6a92496c8f9a1e8aa5fb32148a6f8a92bcfd4dbfa1d0f053 pid=2158 runtime=io.containerd.runc.v2 Nov 1 01:29:15.171902 env[1561]: time="2025-11-01T01:29:15.171867890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:29:15.171902 env[1561]: time="2025-11-01T01:29:15.171889980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:29:15.171902 env[1561]: time="2025-11-01T01:29:15.171896828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:29:15.172016 env[1561]: time="2025-11-01T01:29:15.171954514Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3446410d0d5f1586c694ba2c42631891da4472faf2fd1ea0ca659b6d119c5a54 pid=2175 runtime=io.containerd.runc.v2 Nov 1 01:29:15.172327 env[1561]: time="2025-11-01T01:29:15.172301427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:29:15.172370 env[1561]: time="2025-11-01T01:29:15.172323036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:29:15.172370 env[1561]: time="2025-11-01T01:29:15.172335491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:29:15.172419 env[1561]: time="2025-11-01T01:29:15.172406194Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/55fa9409372669df56526afc14c1d08f5cb4573d4d9f35f21bf0e327717dcd31 pid=2183 runtime=io.containerd.runc.v2 Nov 1 01:29:15.175925 systemd[1]: Started cri-containerd-20326f1c938c5aba6a92496c8f9a1e8aa5fb32148a6f8a92bcfd4dbfa1d0f053.scope. Nov 1 01:29:15.178578 systemd[1]: Started cri-containerd-3446410d0d5f1586c694ba2c42631891da4472faf2fd1ea0ca659b6d119c5a54.scope. Nov 1 01:29:15.179388 systemd[1]: Started cri-containerd-55fa9409372669df56526afc14c1d08f5cb4573d4d9f35f21bf0e327717dcd31.scope. 
Nov 1 01:29:15.181000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit: BPF prog-id=61 op=LOAD Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000199c48 a2=10 a3=1c items=0 ppid=2158 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230333236663163393338633561626136613932343936633866396131 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001996b0 a2=3c a3=c items=0 ppid=2158 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.181000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230333236663163393338633561626136613932343936633866396131 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit: BPF prog-id=62 op=LOAD Nov 1 01:29:15.181000 audit[2173]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001999d8 a2=78 a3=c000279d90 items=0 ppid=2158 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230333236663163393338633561626136613932343936633866396131 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: 
denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit: BPF prog-id=63 op=LOAD Nov 1 01:29:15.181000 audit[2173]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000199770 a2=78 a3=c000279dd8 items=0 ppid=2158 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230333236663163393338633561626136613932343936633866396131 Nov 1 01:29:15.181000 audit: BPF prog-id=63 op=UNLOAD Nov 1 01:29:15.181000 audit: BPF prog-id=62 op=UNLOAD Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: 
denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.181000 audit: BPF prog-id=64 op=LOAD Nov 1 01:29:15.181000 audit[2173]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000199c30 a2=78 a3=c0003ec1e8 items=0 ppid=2158 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230333236663163393338633561626136613932343936633866396131 Nov 1 01:29:15.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:29:15.184000 audit: BPF prog-id=65 op=LOAD Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2175 pid=2202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334343634313064306435663135383663363934626132633432363331 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2175 pid=2202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334343634313064306435663135383663363934626132633432363331 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for 
pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit: BPF prog-id=66 op=LOAD Nov 1 01:29:15.185000 audit[2202]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c000304cd0 items=0 ppid=2175 pid=2202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334343634313064306435663135383663363934626132633432363331 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit: BPF prog-id=67 op=LOAD Nov 1 01:29:15.185000 audit[2202]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c000304d18 items=0 ppid=2175 pid=2202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.185000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334343634313064306435663135383663363934626132633432363331 Nov 1 01:29:15.185000 audit: BPF prog-id=67 op=UNLOAD Nov 1 01:29:15.185000 audit: BPF prog-id=66 op=UNLOAD Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { perfmon } for pid=2202 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2202]: AVC avc: denied { bpf } for pid=2202 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit: BPF prog-id=68 op=LOAD Nov 1 01:29:15.185000 audit[2202]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c000305128 items=0 ppid=2175 pid=2202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334343634313064306435663135383663363934626132633432363331 Nov 1 01:29:15.185000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit: BPF prog-id=69 op=LOAD Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2183 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535666139343039333732363639646635363532366166633134633164 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2183 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535666139343039333732363639646635363532366166633134633164 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit: BPF prog-id=70 op=LOAD Nov 1 01:29:15.185000 audit[2203]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000215cc0 items=0 ppid=2183 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535666139343039333732363639646635363532366166633134633164 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit: BPF prog-id=71 op=LOAD Nov 1 01:29:15.185000 audit[2203]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000215d08 items=0 ppid=2183 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535666139343039333732363639646635363532366166633134633164 Nov 1 01:29:15.185000 audit: BPF prog-id=71 op=UNLOAD Nov 1 01:29:15.185000 audit: BPF prog-id=70 op=UNLOAD Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { perfmon } for pid=2203 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit[2203]: AVC avc: denied { bpf } for pid=2203 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.185000 audit: BPF prog-id=72 op=LOAD Nov 1 01:29:15.185000 audit[2203]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c0003ca118 items=0 ppid=2183 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535666139343039333732363639646635363532366166633134633164 Nov 1 01:29:15.199523 env[1561]: time="2025-11-01T01:29:15.199494678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-34cd8b9336,Uid:733ca44bab5ecc27704211903449df05,Namespace:kube-system,Attempt:0,} returns sandbox id \"20326f1c938c5aba6a92496c8f9a1e8aa5fb32148a6f8a92bcfd4dbfa1d0f053\"" Nov 1 01:29:15.201906 env[1561]: time="2025-11-01T01:29:15.201889530Z" level=info msg="CreateContainer within sandbox \"20326f1c938c5aba6a92496c8f9a1e8aa5fb32148a6f8a92bcfd4dbfa1d0f053\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 01:29:15.206678 env[1561]: time="2025-11-01T01:29:15.206650740Z" level=info msg="CreateContainer within sandbox \"20326f1c938c5aba6a92496c8f9a1e8aa5fb32148a6f8a92bcfd4dbfa1d0f053\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e65bbc682cad9e0bba2d0dfd2fcaa3900c69ca254bc525e5b9b94954163079d6\"" Nov 1 01:29:15.206993 env[1561]: time="2025-11-01T01:29:15.206975828Z" level=info msg="StartContainer for \"e65bbc682cad9e0bba2d0dfd2fcaa3900c69ca254bc525e5b9b94954163079d6\"" Nov 1 01:29:15.208669 env[1561]: time="2025-11-01T01:29:15.208650378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-34cd8b9336,Uid:fb0002f3582128fc4d54a121b245e19e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3446410d0d5f1586c694ba2c42631891da4472faf2fd1ea0ca659b6d119c5a54\"" Nov 1 01:29:15.208959 env[1561]: time="2025-11-01T01:29:15.208946398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-34cd8b9336,Uid:c089f6f004a1077f804153ebb6f7d803,Namespace:kube-system,Attempt:0,} returns sandbox id \"55fa9409372669df56526afc14c1d08f5cb4573d4d9f35f21bf0e327717dcd31\"" Nov 1 01:29:15.210329 env[1561]: time="2025-11-01T01:29:15.210315857Z" level=info msg="CreateContainer within sandbox \"3446410d0d5f1586c694ba2c42631891da4472faf2fd1ea0ca659b6d119c5a54\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 01:29:15.210778 env[1561]: time="2025-11-01T01:29:15.210764353Z" level=info msg="CreateContainer within sandbox \"55fa9409372669df56526afc14c1d08f5cb4573d4d9f35f21bf0e327717dcd31\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 01:29:15.214970 systemd[1]: Started 
cri-containerd-e65bbc682cad9e0bba2d0dfd2fcaa3900c69ca254bc525e5b9b94954163079d6.scope. Nov 1 01:29:15.215190 env[1561]: time="2025-11-01T01:29:15.215107345Z" level=info msg="CreateContainer within sandbox \"3446410d0d5f1586c694ba2c42631891da4472faf2fd1ea0ca659b6d119c5a54\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d3c24cdae91ed482475c7d89356c3ef129c7e4cac2120d8407948711c1088442\"" Nov 1 01:29:15.215314 env[1561]: time="2025-11-01T01:29:15.215297690Z" level=info msg="StartContainer for \"d3c24cdae91ed482475c7d89356c3ef129c7e4cac2120d8407948711c1088442\"" Nov 1 01:29:15.215987 env[1561]: time="2025-11-01T01:29:15.215967786Z" level=info msg="CreateContainer within sandbox \"55fa9409372669df56526afc14c1d08f5cb4573d4d9f35f21bf0e327717dcd31\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"28c5448870a41bfbf64344db4dcc5fb56a7484bc0868ba4fe58b68778c948abd\"" Nov 1 01:29:15.216155 env[1561]: time="2025-11-01T01:29:15.216140869Z" level=info msg="StartContainer for \"28c5448870a41bfbf64344db4dcc5fb56a7484bc0868ba4fe58b68778c948abd\"" Nov 1 01:29:15.219609 kubelet[2100]: E1101 01:29:15.219587 2100 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.94.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 01:29:15.220000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.220000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.220000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.220000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.220000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.220000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.220000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.220000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.220000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.220000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:29:15.220000 audit: BPF prog-id=73 op=LOAD Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2158 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536356262633638326361643965306262613264306466643266636161 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=2158 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536356262633638326361643965306262613264306466643266636161 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for 
pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit: BPF prog-id=74 op=LOAD Nov 1 01:29:15.221000 audit[2272]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c0000248d0 items=0 ppid=2158 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536356262633638326361643965306262613264306466643266636161 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit: BPF prog-id=75 op=LOAD Nov 1 01:29:15.221000 audit[2272]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000024918 items=0 ppid=2158 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.221000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536356262633638326361643965306262613264306466643266636161 Nov 1 01:29:15.221000 audit: BPF prog-id=75 op=UNLOAD Nov 1 01:29:15.221000 audit: BPF prog-id=74 op=UNLOAD Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { perfmon } for pid=2272 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit[2272]: AVC avc: denied { bpf } for pid=2272 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.221000 audit: BPF prog-id=76 op=LOAD Nov 1 01:29:15.221000 audit[2272]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000024d28 items=0 ppid=2158 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536356262633638326361643965306262613264306466643266636161 Nov 1 01:29:15.223878 systemd[1]: Started cri-containerd-28c5448870a41bfbf64344db4dcc5fb56a7484bc0868ba4fe58b68778c948abd.scope. Nov 1 01:29:15.224405 systemd[1]: Started cri-containerd-d3c24cdae91ed482475c7d89356c3ef129c7e4cac2120d8407948711c1088442.scope. 
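The PROCTITLE payloads in the audit records above are the runc command line, hex-encoded with NUL bytes between arguments. A minimal decoding sketch (Python, fed a truncated payload copied from the entries above):

# Decode an audit PROCTITLE hex payload into its NUL-separated argv.
def decode_proctitle(hex_payload: str) -> list[str]:
    raw = bytes.fromhex(hex_payload)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# Truncated payload taken from the records above:
print(decode_proctitle(
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
))
# -> ['runc', '--root', '/run/containerd/runc/k8s.io']

The full payloads continue with "--log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>..." before the record is truncated.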
Nov 1 01:29:15.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit: BPF prog-id=77 op=LOAD Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2175 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633234636461653931656434383234373563376438393335366333 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2175 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.229000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633234636461653931656434383234373563376438393335366333 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit: BPF prog-id=78 op=LOAD Nov 1 01:29:15.229000 audit[2302]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000319b90 items=0 ppid=2175 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633234636461653931656434383234373563376438393335366333 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: 
denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit: BPF prog-id=79 op=LOAD Nov 1 01:29:15.229000 audit[2302]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000319bd8 items=0 ppid=2175 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633234636461653931656434383234373563376438393335366333 Nov 1 01:29:15.229000 audit: BPF prog-id=79 op=UNLOAD Nov 1 01:29:15.229000 audit: BPF prog-id=78 op=UNLOAD Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: 
denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { perfmon } for pid=2302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit[2302]: AVC avc: denied { bpf } for pid=2302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.229000 audit: BPF prog-id=80 op=LOAD Nov 1 01:29:15.229000 audit[2302]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000319fe8 items=0 ppid=2175 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633234636461653931656434383234373563376438393335366333 Nov 1 01:29:15.230000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit: BPF prog-id=81 op=LOAD Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=2183 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238633534343838373061343162666266363433343464623464636335 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001476b0 a2=3c a3=8 items=0 ppid=2183 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238633534343838373061343162666266363433343464623464636335 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for 
pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit: BPF prog-id=82 op=LOAD Nov 1 01:29:15.230000 audit[2303]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c0001e3c90 items=0 ppid=2183 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238633534343838373061343162666266363433343464623464636335 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit: BPF prog-id=83 op=LOAD Nov 1 01:29:15.230000 audit[2303]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000147770 a2=78 a3=c0001e3cd8 items=0 ppid=2183 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.230000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238633534343838373061343162666266363433343464623464636335 Nov 1 01:29:15.230000 audit: BPF prog-id=83 op=UNLOAD Nov 1 01:29:15.230000 audit: BPF prog-id=82 op=UNLOAD Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { perfmon } for pid=2303 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit[2303]: AVC avc: denied { bpf } for pid=2303 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:15.230000 audit: BPF prog-id=84 op=LOAD Nov 1 01:29:15.230000 audit[2303]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c0003b00e8 items=0 ppid=2183 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:15.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238633534343838373061343162666266363433343464623464636335 Nov 1 01:29:15.240781 env[1561]: time="2025-11-01T01:29:15.240752725Z" level=info msg="StartContainer for \"e65bbc682cad9e0bba2d0dfd2fcaa3900c69ca254bc525e5b9b94954163079d6\" returns successfully" Nov 1 01:29:15.260925 env[1561]: time="2025-11-01T01:29:15.260891735Z" level=info msg="StartContainer for 
\"d3c24cdae91ed482475c7d89356c3ef129c7e4cac2120d8407948711c1088442\" returns successfully" Nov 1 01:29:15.261035 env[1561]: time="2025-11-01T01:29:15.261012010Z" level=info msg="StartContainer for \"28c5448870a41bfbf64344db4dcc5fb56a7484bc0868ba4fe58b68778c948abd\" returns successfully" Nov 1 01:29:15.739874 kubelet[2100]: I1101 01:29:15.739857 2100 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:15.866000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:15.866000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000055bf0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:29:15.866000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:15.866000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:29:15.866000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001316020 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:29:15.866000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:29:15.897000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:15.897000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:15.897000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=45 a1=c005e1e020 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:29:15.897000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=41 a1=c005878000 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" 
exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:29:15.897000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:29:15.897000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:29:15.897000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sda9" ino=520985 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:15.897000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=41 a1=c005538000 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:29:15.897000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:29:15.898000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sda9" ino=520991 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:15.898000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=47 a1=c005b463c0 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:29:15.898000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:29:15.898000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:15.898000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=65 a1=c00629e060 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:29:15.898000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:29:15.898000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:15.898000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=65 a1=c003e3e180 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:29:15.898000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:29:15.908355 kubelet[2100]: E1101 01:29:15.908331 2100 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-34cd8b9336\" not found" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:16.008628 kubelet[2100]: I1101 01:29:16.008570 2100 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:16.008628 kubelet[2100]: E1101 01:29:16.008591 2100 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-34cd8b9336\": node \"ci-3510.3.8-n-34cd8b9336\" not found" Nov 1 01:29:16.012640 kubelet[2100]: I1101 01:29:16.012625 2100 apiserver.go:52] "Watching apiserver" Nov 1 01:29:16.015684 kubelet[2100]: I1101 01:29:16.015668 2100 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:16.015766 kubelet[2100]: I1101 01:29:16.015759 2100 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 01:29:16.018895 kubelet[2100]: E1101 01:29:16.018881 2100 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-34cd8b9336\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:16.018895 kubelet[2100]: I1101 01:29:16.018894 2100 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:16.019655 kubelet[2100]: E1101 01:29:16.019647 2100 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:16.019682 kubelet[2100]: I1101 01:29:16.019656 2100 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:16.020247 kubelet[2100]: E1101 01:29:16.020239 2100 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-34cd8b9336\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-34cd8b9336" 
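In the watch denials above, arch=c000003e is x86_64 and syscall=254 is inotify_add_watch, so exit=-13 means the watch on the certificate files failed with EACCES while SELinux was enforcing (permissive=0). A small interpretation sketch (Python):

# Interpret the SYSCALL fields of the watch denials above (x86_64).
import errno

SYSCALL_254_X86_64 = "inotify_add_watch"  # syscall=254 with arch=c000003e (x86_64)

def explain_exit(exit_code: int) -> str:
    # Negative exit values are -errno; exit=-13 is -EACCES.
    return f"{SYSCALL_254_X86_64} returned {errno.errorcode[-exit_code]}"

print(explain_exit(-13))  # inotify_add_watch returned EACCES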
Nov 1 01:29:16.036811 kubelet[2100]: I1101 01:29:16.036799 2100 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:16.037434 kubelet[2100]: I1101 01:29:16.037425 2100 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:16.037730 kubelet[2100]: E1101 01:29:16.037720 2100 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:16.038014 kubelet[2100]: I1101 01:29:16.038008 2100 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:16.038152 kubelet[2100]: E1101 01:29:16.038143 2100 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-34cd8b9336\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:16.038890 kubelet[2100]: E1101 01:29:16.038882 2100 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-34cd8b9336\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:17.041168 kubelet[2100]: I1101 01:29:17.041108 2100 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:17.041996 kubelet[2100]: I1101 01:29:17.041378 2100 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:17.049462 kubelet[2100]: I1101 01:29:17.049382 2100 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 01:29:17.049885 kubelet[2100]: I1101 01:29:17.049802 2100 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 01:29:17.673433 kubelet[2100]: I1101 01:29:17.673358 2100 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:17.681426 kubelet[2100]: I1101 01:29:17.681350 2100 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 01:29:18.642105 systemd[1]: Reloading. 
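The kubelet's repeated "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors resolve once the built-in priority classes exist in the API server. A hypothetical check using the official kubernetes Python client (assumes a reachable API server and a kubeconfig; not part of the boot sequence recorded here):

# Check whether the PriorityClass the kubelet complains about exists yet.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
scheduling = client.SchedulingV1Api()
try:
    pc = scheduling.read_priority_class("system-node-critical")
    print(f"found {pc.metadata.name} with value {pc.value}")
except ApiException as exc:
    if exc.status == 404:
        # Matches the log: the class has not been created yet, so mirror pods
        # for the static control-plane pods are rejected.
        print("system-node-critical not found yet")
    else:
        raise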
Nov 1 01:29:18.671074 /usr/lib/systemd/system-generators/torcx-generator[2440]: time="2025-11-01T01:29:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:29:18.671091 /usr/lib/systemd/system-generators/torcx-generator[2440]: time="2025-11-01T01:29:18Z" level=info msg="torcx already run" Nov 1 01:29:18.676000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="sda9" ino=521018 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Nov 1 01:29:18.704291 kernel: kauditd_printk_skb: 581 callbacks suppressed Nov 1 01:29:18.704360 kernel: audit: type=1400 audit(1761960558.676:555): avc: denied { watch } for pid=2324 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="sda9" ino=521018 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Nov 1 01:29:18.676000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6 a1=c0005fe6c0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:29:18.676000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:29:18.692000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:18.692000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000539b20 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:29:18.692000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:29:18.692000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:18.692000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000ac6380 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:29:18.796432 kernel: audit: type=1300 
audit(1761960558.676:555): arch=c000003e syscall=254 success=no exit=-13 a0=6 a1=c0005fe6c0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:29:18.796450 kernel: audit: type=1327 audit(1761960558.676:555): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:29:18.796462 kernel: audit: type=1400 audit(1761960558.692:556): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:18.796481 kernel: audit: type=1300 audit(1761960558.692:556): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000539b20 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:29:18.796498 kernel: audit: type=1327 audit(1761960558.692:556): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:29:18.796514 kernel: audit: type=1400 audit(1761960558.692:557): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:18.796526 kernel: audit: type=1300 audit(1761960558.692:557): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000ac6380 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:29:18.796542 kernel: audit: type=1327 audit(1761960558.692:557): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:29:18.796557 kernel: audit: type=1400 audit(1761960558.692:558): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:18.692000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:29:18.692000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" 
dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:18.692000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000d9e020 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:29:18.692000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:29:18.692000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:29:18.692000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000539e60 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:29:18.692000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:29:19.730319 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:29:19.730328 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:29:19.741720 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit: BPF prog-id=85 op=LOAD Nov 1 01:29:19.795000 audit: BPF prog-id=77 op=UNLOAD Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.796000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.796000 audit: BPF prog-id=86 op=LOAD Nov 1 01:29:19.796000 audit: BPF prog-id=46 op=UNLOAD Nov 1 01:29:19.796000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.796000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.796000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.796000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.796000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.796000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.796000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.796000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit: BPF prog-id=87 op=LOAD Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit: BPF prog-id=88 op=LOAD Nov 1 01:29:19.797000 audit: BPF prog-id=47 op=UNLOAD Nov 1 01:29:19.797000 audit: BPF prog-id=48 op=UNLOAD Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.797000 audit: BPF prog-id=89 op=LOAD Nov 1 01:29:19.797000 audit: BPF prog-id=69 op=UNLOAD Nov 1 01:29:19.798000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.798000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.798000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.798000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.798000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.798000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.798000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.798000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.798000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.798000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.798000 audit: BPF prog-id=90 op=LOAD Nov 1 01:29:19.798000 audit: BPF prog-id=49 op=UNLOAD Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit: BPF prog-id=91 op=LOAD Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.799000 audit: BPF prog-id=92 op=LOAD Nov 1 01:29:19.799000 audit: BPF prog-id=50 op=UNLOAD Nov 1 01:29:19.799000 audit: BPF prog-id=51 op=UNLOAD Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit: BPF prog-id=93 op=LOAD Nov 1 01:29:19.800000 audit: BPF prog-id=52 op=UNLOAD Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit: BPF prog-id=94 op=LOAD Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit: BPF prog-id=95 op=LOAD Nov 1 01:29:19.800000 audit: BPF prog-id=53 op=UNLOAD Nov 1 01:29:19.800000 audit: BPF prog-id=54 op=UNLOAD Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.801000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
01:29:19.801000 audit: BPF prog-id=96 op=LOAD Nov 1 01:29:19.801000 audit: BPF prog-id=55 op=UNLOAD Nov 1 01:29:19.801000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.801000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.801000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.801000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.801000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.801000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.801000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.801000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.801000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.801000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.801000 audit: BPF prog-id=97 op=LOAD Nov 1 01:29:19.801000 audit: BPF prog-id=56 op=UNLOAD Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC 
avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit: BPF prog-id=98 op=LOAD Nov 1 01:29:19.802000 audit: BPF prog-id=57 op=UNLOAD Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit: BPF prog-id=99 op=LOAD Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.802000 audit: BPF prog-id=100 op=LOAD Nov 1 01:29:19.802000 audit: BPF prog-id=58 op=UNLOAD Nov 1 01:29:19.802000 audit: BPF prog-id=59 op=UNLOAD Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit: BPF prog-id=101 op=LOAD Nov 1 01:29:19.803000 audit: BPF prog-id=65 op=UNLOAD Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.803000 audit: BPF prog-id=102 op=LOAD Nov 1 01:29:19.803000 audit: BPF prog-id=81 op=UNLOAD Nov 1 01:29:19.804000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.804000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.804000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.804000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.804000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.804000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.804000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 01:29:19.804000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.804000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.804000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.804000 audit: BPF prog-id=103 op=LOAD Nov 1 01:29:19.804000 audit: BPF prog-id=60 op=UNLOAD Nov 1 01:29:19.804000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.804000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.805000 audit: BPF prog-id=104 op=LOAD Nov 1 01:29:19.805000 audit: BPF prog-id=73 op=UNLOAD Nov 1 01:29:19.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.806000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.806000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.806000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.806000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.806000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:19.806000 audit: BPF prog-id=105 op=LOAD Nov 1 01:29:19.806000 audit: BPF prog-id=61 op=UNLOAD Nov 1 01:29:19.813581 systemd[1]: Stopping kubelet.service... Nov 1 01:29:19.843126 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 01:29:19.843612 systemd[1]: Stopped kubelet.service. Nov 1 01:29:19.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:19.847478 systemd[1]: Starting kubelet.service... Nov 1 01:29:20.092020 systemd[1]: Started kubelet.service. Nov 1 01:29:20.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:20.113301 kubelet[2505]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 01:29:20.113301 kubelet[2505]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 01:29:20.113512 kubelet[2505]: I1101 01:29:20.113337 2505 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 01:29:20.116950 kubelet[2505]: I1101 01:29:20.116911 2505 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 01:29:20.116950 kubelet[2505]: I1101 01:29:20.116921 2505 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 01:29:20.116950 kubelet[2505]: I1101 01:29:20.116936 2505 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 01:29:20.116950 kubelet[2505]: I1101 01:29:20.116941 2505 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 01:29:20.117062 kubelet[2505]: I1101 01:29:20.117051 2505 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 01:29:20.117732 kubelet[2505]: I1101 01:29:20.117696 2505 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 01:29:20.119638 kubelet[2505]: I1101 01:29:20.119621 2505 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:29:20.120873 kubelet[2505]: E1101 01:29:20.120858 2505 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:29:20.120918 kubelet[2505]: I1101 01:29:20.120885 2505 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 01:29:20.140168 kubelet[2505]: I1101 01:29:20.140148 2505 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 01:29:20.140307 kubelet[2505]: I1101 01:29:20.140289 2505 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:29:20.140462 kubelet[2505]: I1101 01:29:20.140308 2505 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-34cd8b9336","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 01:29:20.140462 kubelet[2505]: I1101 01:29:20.140424 2505 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 01:29:20.140462 kubelet[2505]: I1101 01:29:20.140432 2505 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 01:29:20.140462 kubelet[2505]: I1101 01:29:20.140449 2505 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 01:29:20.140906 kubelet[2505]: I1101 01:29:20.140894 2505 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:29:20.141114 kubelet[2505]: I1101 01:29:20.141064 2505 kubelet.go:475] "Attempting to sync node with API server" Nov 1 01:29:20.141114 kubelet[2505]: I1101 01:29:20.141080 2505 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:29:20.141114 kubelet[2505]: I1101 01:29:20.141097 2505 kubelet.go:387] "Adding apiserver pod source" Nov 1 01:29:20.141114 kubelet[2505]: I1101 01:29:20.141109 2505 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:29:20.141863 kubelet[2505]: I1101 01:29:20.141841 2505 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 01:29:20.142606 kubelet[2505]: I1101 01:29:20.142593 2505 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 01:29:20.142669 kubelet[2505]: I1101 01:29:20.142618 2505 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 01:29:20.143868 
kubelet[2505]: I1101 01:29:20.143858 2505 server.go:1262] "Started kubelet" Nov 1 01:29:20.143926 kubelet[2505]: I1101 01:29:20.143896 2505 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:29:20.144270 kubelet[2505]: I1101 01:29:20.143968 2505 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 01:29:20.144270 kubelet[2505]: I1101 01:29:20.144037 2505 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 01:29:20.144270 kubelet[2505]: I1101 01:29:20.144241 2505 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 01:29:20.142000 audit[2505]: AVC avc: denied { mac_admin } for pid=2505 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:20.142000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:29:20.142000 audit[2505]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bba510 a1=c000f80480 a2=c000bba4e0 a3=25 items=0 ppid=1 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:20.142000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:29:20.143000 audit[2505]: AVC avc: denied { mac_admin } for pid=2505 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:20.143000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:29:20.143000 audit[2505]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000fb64c0 a1=c000f80498 a2=c000bba5a0 a3=25 items=0 ppid=1 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:20.143000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:29:20.144751 kubelet[2505]: I1101 01:29:20.144390 2505 kubelet.go:1567] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 01:29:20.144751 kubelet[2505]: I1101 01:29:20.144444 2505 kubelet.go:1571] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 01:29:20.144751 kubelet[2505]: I1101 01:29:20.144475 2505 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:29:20.144751 kubelet[2505]: I1101 01:29:20.144606 2505 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 01:29:20.144751 kubelet[2505]: I1101 01:29:20.144696 2505 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 01:29:20.144930 kubelet[2505]: I1101 01:29:20.144771 2505 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 01:29:20.144930 kubelet[2505]: I1101 01:29:20.144873 2505 reconciler.go:29] "Reconciler: start to sync state" Nov 1 01:29:20.145031 kubelet[2505]: I1101 01:29:20.144973 2505 server.go:310] "Adding debug handlers to kubelet server" Nov 1 01:29:20.145199 kubelet[2505]: E1101 01:29:20.145178 2505 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-34cd8b9336\" not found" Nov 1 01:29:20.145249 kubelet[2505]: I1101 01:29:20.145204 2505 factory.go:223] Registration of the systemd container factory successfully Nov 1 01:29:20.145289 kubelet[2505]: I1101 01:29:20.145276 2505 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:29:20.145784 kubelet[2505]: E1101 01:29:20.145769 2505 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:29:20.145923 kubelet[2505]: I1101 01:29:20.145912 2505 factory.go:223] Registration of the containerd container factory successfully Nov 1 01:29:20.151946 kubelet[2505]: I1101 01:29:20.151926 2505 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 01:29:20.152486 kubelet[2505]: I1101 01:29:20.152465 2505 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 01:29:20.152486 kubelet[2505]: I1101 01:29:20.152477 2505 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 01:29:20.152583 kubelet[2505]: I1101 01:29:20.152495 2505 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 01:29:20.152583 kubelet[2505]: E1101 01:29:20.152520 2505 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:29:20.160754 kubelet[2505]: I1101 01:29:20.160739 2505 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:29:20.160754 kubelet[2505]: I1101 01:29:20.160750 2505 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:29:20.160754 kubelet[2505]: I1101 01:29:20.160760 2505 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:29:20.160861 kubelet[2505]: I1101 01:29:20.160829 2505 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 01:29:20.160861 kubelet[2505]: I1101 01:29:20.160836 2505 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 01:29:20.160861 kubelet[2505]: I1101 01:29:20.160847 2505 policy_none.go:49] "None policy: Start" Nov 1 01:29:20.160861 kubelet[2505]: I1101 01:29:20.160852 2505 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 01:29:20.160861 kubelet[2505]: I1101 01:29:20.160857 2505 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 01:29:20.160960 kubelet[2505]: I1101 01:29:20.160913 2505 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 1 01:29:20.160960 kubelet[2505]: I1101 01:29:20.160918 2505 policy_none.go:47] "Start" Nov 1 01:29:20.162602 kubelet[2505]: E1101 01:29:20.162591 2505 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 01:29:20.161000 audit[2505]: AVC avc: denied { mac_admin } for pid=2505 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:20.161000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:29:20.161000 audit[2505]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00104de00 a1=c001517068 a2=c00104ddd0 a3=25 items=0 ppid=1 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:20.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:29:20.162787 kubelet[2505]: E1101 01:29:20.162636 2505 server.go:96] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" path="/var/lib/kubelet/device-plugins/" Nov 1 01:29:20.162787 kubelet[2505]: I1101 01:29:20.162714 2505 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:29:20.162787 kubelet[2505]: I1101 01:29:20.162721 2505 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:29:20.162850 kubelet[2505]: I1101 01:29:20.162816 2505 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:29:20.163229 kubelet[2505]: E1101 01:29:20.163214 2505 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 01:29:20.254755 kubelet[2505]: I1101 01:29:20.254696 2505 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.255039 kubelet[2505]: I1101 01:29:20.254977 2505 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.255167 kubelet[2505]: I1101 01:29:20.254977 2505 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.264392 kubelet[2505]: I1101 01:29:20.264298 2505 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 01:29:20.264618 kubelet[2505]: E1101 01:29:20.264453 2505 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-34cd8b9336\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.264785 kubelet[2505]: I1101 01:29:20.264728 2505 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 01:29:20.264917 kubelet[2505]: E1101 01:29:20.264813 2505 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.264917 kubelet[2505]: I1101 01:29:20.264823 2505 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 01:29:20.265109 kubelet[2505]: E1101 01:29:20.264936 2505 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-34cd8b9336\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.267435 kubelet[2505]: I1101 01:29:20.267372 2505 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.279170 kubelet[2505]: I1101 01:29:20.279115 2505 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.279460 kubelet[2505]: I1101 01:29:20.279262 2505 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.446548 kubelet[2505]: I1101 01:29:20.446510 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/733ca44bab5ecc27704211903449df05-k8s-certs\") pod 
\"kube-apiserver-ci-3510.3.8-n-34cd8b9336\" (UID: \"733ca44bab5ecc27704211903449df05\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.446548 kubelet[2505]: I1101 01:29:20.446547 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c089f6f004a1077f804153ebb6f7d803-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" (UID: \"c089f6f004a1077f804153ebb6f7d803\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.446643 kubelet[2505]: I1101 01:29:20.446559 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c089f6f004a1077f804153ebb6f7d803-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" (UID: \"c089f6f004a1077f804153ebb6f7d803\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.446643 kubelet[2505]: I1101 01:29:20.446581 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c089f6f004a1077f804153ebb6f7d803-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" (UID: \"c089f6f004a1077f804153ebb6f7d803\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.446643 kubelet[2505]: I1101 01:29:20.446598 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c089f6f004a1077f804153ebb6f7d803-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" (UID: \"c089f6f004a1077f804153ebb6f7d803\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.446643 kubelet[2505]: I1101 01:29:20.446613 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/733ca44bab5ecc27704211903449df05-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-34cd8b9336\" (UID: \"733ca44bab5ecc27704211903449df05\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.446643 kubelet[2505]: I1101 01:29:20.446625 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/733ca44bab5ecc27704211903449df05-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-34cd8b9336\" (UID: \"733ca44bab5ecc27704211903449df05\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.446766 kubelet[2505]: I1101 01:29:20.446637 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c089f6f004a1077f804153ebb6f7d803-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" (UID: \"c089f6f004a1077f804153ebb6f7d803\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:20.446766 kubelet[2505]: I1101 01:29:20.446653 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb0002f3582128fc4d54a121b245e19e-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-34cd8b9336\" (UID: \"fb0002f3582128fc4d54a121b245e19e\") " 
pod="kube-system/kube-scheduler-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:21.142158 kubelet[2505]: I1101 01:29:21.142047 2505 apiserver.go:52] "Watching apiserver" Nov 1 01:29:21.158419 kubelet[2505]: I1101 01:29:21.158367 2505 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:21.158741 kubelet[2505]: I1101 01:29:21.158661 2505 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:21.158916 kubelet[2505]: I1101 01:29:21.158816 2505 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:21.165846 kubelet[2505]: I1101 01:29:21.165797 2505 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 01:29:21.166039 kubelet[2505]: E1101 01:29:21.165903 2505 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-34cd8b9336\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:21.166901 kubelet[2505]: I1101 01:29:21.166846 2505 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 01:29:21.166901 kubelet[2505]: I1101 01:29:21.166846 2505 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 01:29:21.167270 kubelet[2505]: E1101 01:29:21.167077 2505 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-34cd8b9336\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:21.167270 kubelet[2505]: E1101 01:29:21.167093 2505 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-34cd8b9336\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:21.200758 kubelet[2505]: I1101 01:29:21.200698 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-34cd8b9336" podStartSLOduration=4.200680889 podStartE2EDuration="4.200680889s" podCreationTimestamp="2025-11-01 01:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:29:21.200591329 +0000 UTC m=+1.106403556" watchObservedRunningTime="2025-11-01 01:29:21.200680889 +0000 UTC m=+1.106493094" Nov 1 01:29:21.218443 kubelet[2505]: I1101 01:29:21.218389 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-34cd8b9336" podStartSLOduration=4.218374648 podStartE2EDuration="4.218374648s" podCreationTimestamp="2025-11-01 01:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:29:21.218315614 +0000 UTC m=+1.124127812" watchObservedRunningTime="2025-11-01 01:29:21.218374648 +0000 UTC m=+1.124186847" Nov 1 01:29:21.218615 kubelet[2505]: I1101 01:29:21.218473 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-34cd8b9336" podStartSLOduration=4.218468189 
podStartE2EDuration="4.218468189s" podCreationTimestamp="2025-11-01 01:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:29:21.210367785 +0000 UTC m=+1.116179984" watchObservedRunningTime="2025-11-01 01:29:21.218468189 +0000 UTC m=+1.124280390" Nov 1 01:29:21.245349 kubelet[2505]: I1101 01:29:21.245288 2505 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 01:29:23.733221 kubelet[2505]: I1101 01:29:23.733138 2505 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 01:29:23.734197 env[1561]: time="2025-11-01T01:29:23.733963888Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 01:29:23.734886 kubelet[2505]: I1101 01:29:23.734340 2505 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 01:29:24.806891 systemd[1]: Created slice kubepods-besteffort-podfba7a568_a97f_4e57_b503_7f4f984a684b.slice. Nov 1 01:29:24.875739 kubelet[2505]: I1101 01:29:24.875644 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fba7a568-a97f-4e57-b503-7f4f984a684b-lib-modules\") pod \"kube-proxy-l5whp\" (UID: \"fba7a568-a97f-4e57-b503-7f4f984a684b\") " pod="kube-system/kube-proxy-l5whp" Nov 1 01:29:24.876608 kubelet[2505]: I1101 01:29:24.875756 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fba7a568-a97f-4e57-b503-7f4f984a684b-kube-proxy\") pod \"kube-proxy-l5whp\" (UID: \"fba7a568-a97f-4e57-b503-7f4f984a684b\") " pod="kube-system/kube-proxy-l5whp" Nov 1 01:29:24.876608 kubelet[2505]: I1101 01:29:24.875810 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lp7m\" (UniqueName: \"kubernetes.io/projected/fba7a568-a97f-4e57-b503-7f4f984a684b-kube-api-access-6lp7m\") pod \"kube-proxy-l5whp\" (UID: \"fba7a568-a97f-4e57-b503-7f4f984a684b\") " pod="kube-system/kube-proxy-l5whp" Nov 1 01:29:24.876608 kubelet[2505]: I1101 01:29:24.875863 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fba7a568-a97f-4e57-b503-7f4f984a684b-xtables-lock\") pod \"kube-proxy-l5whp\" (UID: \"fba7a568-a97f-4e57-b503-7f4f984a684b\") " pod="kube-system/kube-proxy-l5whp" Nov 1 01:29:24.955154 systemd[1]: Created slice kubepods-besteffort-pod7f19b21b_9941_47ec_bc44_ef647ac816b6.slice. 
Nov 1 01:29:24.976934 kubelet[2505]: I1101 01:29:24.976880 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7f19b21b-9941-47ec-bc44-ef647ac816b6-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-v4r7p\" (UID: \"7f19b21b-9941-47ec-bc44-ef647ac816b6\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-v4r7p" Nov 1 01:29:24.977320 kubelet[2505]: I1101 01:29:24.977261 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swgdx\" (UniqueName: \"kubernetes.io/projected/7f19b21b-9941-47ec-bc44-ef647ac816b6-kube-api-access-swgdx\") pod \"tigera-operator-65cdcdfd6d-v4r7p\" (UID: \"7f19b21b-9941-47ec-bc44-ef647ac816b6\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-v4r7p" Nov 1 01:29:24.979576 update_engine[1555]: I1101 01:29:24.979528 1555 update_attempter.cc:509] Updating boot flags... Nov 1 01:29:24.987014 kubelet[2505]: I1101 01:29:24.986973 2505 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 01:29:25.128009 env[1561]: time="2025-11-01T01:29:25.127816423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l5whp,Uid:fba7a568-a97f-4e57-b503-7f4f984a684b,Namespace:kube-system,Attempt:0,}" Nov 1 01:29:25.151142 env[1561]: time="2025-11-01T01:29:25.150907261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:29:25.151142 env[1561]: time="2025-11-01T01:29:25.151011882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:29:25.151142 env[1561]: time="2025-11-01T01:29:25.151052355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:29:25.151694 env[1561]: time="2025-11-01T01:29:25.151371239Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/45709dc512f5044139981bd84c97b789d146ce2b93651e7a6ddf5555ff0c50eb pid=2617 runtime=io.containerd.runc.v2 Nov 1 01:29:25.184283 systemd[1]: Started cri-containerd-45709dc512f5044139981bd84c97b789d146ce2b93651e7a6ddf5555ff0c50eb.scope. Nov 1 01:29:25.195000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.224841 kernel: kauditd_printk_skb: 263 callbacks suppressed Nov 1 01:29:25.224893 kernel: audit: type=1400 audit(1761960565.195:809): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.261079 env[1561]: time="2025-11-01T01:29:25.261052841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-v4r7p,Uid:7f19b21b-9941-47ec-bc44-ef647ac816b6,Namespace:tigera-operator,Attempt:0,}" Nov 1 01:29:25.267557 env[1561]: time="2025-11-01T01:29:25.267516932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:29:25.267557 env[1561]: time="2025-11-01T01:29:25.267539665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:29:25.267557 env[1561]: time="2025-11-01T01:29:25.267546796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:29:25.267730 env[1561]: time="2025-11-01T01:29:25.267608132Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d440c9ac29f13f6e22b1a6bfdc7b6acdefc3cce877c13d6b9abfc24f204249c pid=2651 runtime=io.containerd.runc.v2 Nov 1 01:29:25.195000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.351361 kernel: audit: type=1400 audit(1761960565.195:810): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.351420 kernel: audit: type=1400 audit(1761960565.195:811): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.195000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.352690 systemd[1]: Started cri-containerd-8d440c9ac29f13f6e22b1a6bfdc7b6acdefc3cce877c13d6b9abfc24f204249c.scope. Nov 1 01:29:25.414703 kernel: audit: type=1400 audit(1761960565.195:812): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.478311 kernel: audit: type=1400 audit(1761960565.195:813): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.541891 kernel: audit: type=1400 audit(1761960565.195:814): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.605452 kernel: audit: type=1400 audit(1761960565.195:815): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.669324 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Nov 1 01:29:25.669355 kernel: audit: type=1400 audit(1761960565.195:816): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.696364 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Nov 1 01:29:25.195000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.349000 audit: BPF prog-id=106 op=LOAD Nov 1 01:29:25.350000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.350000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2617 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435373039646335313266353034343133393938316264383463393762 Nov 1 01:29:25.350000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.350000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2617 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435373039646335313266353034343133393938316264383463393762 Nov 1 01:29:25.350000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.350000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.350000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.350000 audit[2627]: AVC avc: denied { 
perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.350000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.350000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.350000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.350000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.350000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.355000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.355000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.355000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.355000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.355000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.355000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.355000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.355000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.355000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.350000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.350000 audit: BPF prog-id=107 op=LOAD Nov 1 01:29:25.350000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c00020a550 items=0 ppid=2617 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435373039646335313266353034343133393938316264383463393762 Nov 1 01:29:25.476000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.476000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.476000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.476000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.476000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.476000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.476000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.476000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.540000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.540000 audit: BPF prog-id=108 op=LOAD Nov 1 01:29:25.540000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.540000 audit[2660]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2651 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864343430633961633239663133663665323262316136626664633762 Nov 1 01:29:25.540000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.540000 audit[2660]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 
a1=c0001976b0 a2=3c a3=c items=0 ppid=2651 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864343430633961633239663133663665323262316136626664633762 Nov 1 01:29:25.540000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.540000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.540000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.540000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.540000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.540000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.540000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.540000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.540000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.476000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.476000 audit: BPF prog-id=109 op=LOAD Nov 1 01:29:25.540000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.476000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c00020a598 items=0 ppid=2617 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.476000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435373039646335313266353034343133393938316264383463393762 Nov 1 01:29:25.758000 audit: BPF prog-id=109 op=UNLOAD Nov 1 01:29:25.758000 audit: BPF prog-id=107 op=UNLOAD Nov 1 01:29:25.758000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.758000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.758000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.758000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.758000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.758000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.758000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.758000 audit[2627]: AVC avc: denied { perfmon } for pid=2627 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.758000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.540000 audit[2660]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0000987a0 items=0 ppid=2651 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864343430633961633239663133663665323262316136626664633762 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
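The AVC/SYSCALL/PROCTITLE triples above appear to be runc loading BPF programs (likely the device-cgroup filters) for the two new pod sandboxes. auditd hex-encodes the PROCTITLE value, with NUL bytes separating the arguments of the recorded command line. A small decoding sketch, assuming Python; the hex below is the leading portion of the proctitle logged above (the full value is truncated in the log and is left that way here):

    def decode_proctitle(hexstr):
        """Decode an auditd PROCTITLE field: hex-encoded argv with NUL separators."""
        return [arg.decode() for arg in bytes.fromhex(hexstr).split(b"\x00")]

    head = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
            "002D2D6C6F67")
    print(decode_proctitle(head))
    # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']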
Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.758000 audit[2627]: AVC avc: denied { bpf } for pid=2627 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.758000 audit: BPF prog-id=111 op=LOAD Nov 1 01:29:25.758000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c00020a9a8 items=0 ppid=2617 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435373039646335313266353034343133393938316264383463393762 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit: BPF prog-id=112 op=LOAD Nov 1 01:29:25.786000 audit[2660]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0000987e8 items=0 ppid=2651 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864343430633961633239663133663665323262316136626664633762 Nov 1 01:29:25.786000 audit: BPF prog-id=112 op=UNLOAD Nov 1 01:29:25.786000 audit: BPF prog-id=110 op=UNLOAD Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { perfmon } for pid=2660 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit[2660]: AVC avc: denied { bpf } for pid=2660 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.786000 audit: BPF prog-id=113 op=LOAD Nov 1 01:29:25.786000 audit[2660]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000098bf8 items=0 ppid=2651 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864343430633961633239663133663665323262316136626664633762 Nov 1 01:29:25.792508 env[1561]: time="2025-11-01T01:29:25.792482943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l5whp,Uid:fba7a568-a97f-4e57-b503-7f4f984a684b,Namespace:kube-system,Attempt:0,} returns sandbox id \"45709dc512f5044139981bd84c97b789d146ce2b93651e7a6ddf5555ff0c50eb\"" Nov 1 01:29:25.794717 env[1561]: time="2025-11-01T01:29:25.794702521Z" level=info msg="CreateContainer within sandbox \"45709dc512f5044139981bd84c97b789d146ce2b93651e7a6ddf5555ff0c50eb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 01:29:25.799825 env[1561]: time="2025-11-01T01:29:25.799776928Z" level=info msg="CreateContainer within sandbox \"45709dc512f5044139981bd84c97b789d146ce2b93651e7a6ddf5555ff0c50eb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e9f17b2569cfd9fbbddd69b772f438ff61b4c1dbd1cb1f11c0de831551eaed69\"" Nov 1 01:29:25.800404 env[1561]: time="2025-11-01T01:29:25.800029498Z" level=info msg="StartContainer for \"e9f17b2569cfd9fbbddd69b772f438ff61b4c1dbd1cb1f11c0de831551eaed69\"" Nov 1 01:29:25.804732 env[1561]: time="2025-11-01T01:29:25.804706531Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-v4r7p,Uid:7f19b21b-9941-47ec-bc44-ef647ac816b6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8d440c9ac29f13f6e22b1a6bfdc7b6acdefc3cce877c13d6b9abfc24f204249c\"" Nov 1 01:29:25.805578 env[1561]: time="2025-11-01T01:29:25.805562416Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 01:29:25.807740 systemd[1]: Started cri-containerd-e9f17b2569cfd9fbbddd69b772f438ff61b4c1dbd1cb1f11c0de831551eaed69.scope. Nov 1 01:29:25.814000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.814000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=7f1d8e27b2c8 items=0 ppid=2617 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663137623235363963666439666262646464363962373732663433 Nov 1 01:29:25.814000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.814000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.814000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.814000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.814000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.814000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.814000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.814000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.814000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.814000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.814000 audit: BPF prog-id=114 op=LOAD Nov 1 01:29:25.814000 audit[2698]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0000f3c58 items=0 ppid=2617 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663137623235363963666439666262646464363962373732663433 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit: BPF prog-id=115 op=LOAD Nov 1 01:29:25.815000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0000f3ca8 items=0 ppid=2617 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663137623235363963666439666262646464363962373732663433 Nov 1 01:29:25.815000 audit: BPF prog-id=115 op=UNLOAD Nov 1 01:29:25.815000 audit: BPF prog-id=114 op=UNLOAD Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
01:29:25.815000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { perfmon } for pid=2698 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit[2698]: AVC avc: denied { bpf } for pid=2698 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:25.815000 audit: BPF prog-id=116 op=LOAD Nov 1 01:29:25.815000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0000f3d38 items=0 ppid=2617 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:25.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663137623235363963666439666262646464363962373732663433 Nov 1 01:29:25.821912 env[1561]: time="2025-11-01T01:29:25.821888451Z" level=info msg="StartContainer for \"e9f17b2569cfd9fbbddd69b772f438ff61b4c1dbd1cb1f11c0de831551eaed69\" returns successfully" Nov 1 01:29:26.097000 audit[2773]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.097000 audit[2773]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd8d80980 a2=0 a3=7ffdd8d8096c items=0 ppid=2709 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.097000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 01:29:26.098000 audit[2774]: NETFILTER_CFG table=mangle:39 family=10 entries=1 
op=nft_register_chain pid=2774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.098000 audit[2774]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5168bd90 a2=0 a3=7ffc5168bd7c items=0 ppid=2709 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.098000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 01:29:26.101000 audit[2776]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2776 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.101000 audit[2776]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff51ccefd0 a2=0 a3=7fff51ccefbc items=0 ppid=2709 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.101000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 01:29:26.101000 audit[2777]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=2777 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.101000 audit[2777]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfb7f9540 a2=0 a3=7ffcfb7f952c items=0 ppid=2709 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.101000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 01:29:26.104000 audit[2779]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2779 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.104000 audit[2779]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedea2ce80 a2=0 a3=7ffedea2ce6c items=0 ppid=2709 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 01:29:26.104000 audit[2780]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.104000 audit[2780]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff87598060 a2=0 a3=7fff8759804c items=0 ppid=2709 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 01:29:26.203000 audit[2781]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2781 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.203000 audit[2781]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff6eb98910 a2=0 a3=7fff6eb988fc items=0 ppid=2709 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.203000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 01:29:26.210000 audit[2783]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.210000 audit[2783]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcda29cfc0 a2=0 a3=7ffcda29cfac items=0 ppid=2709 pid=2783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.210000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Nov 1 01:29:26.220000 audit[2786]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.220000 audit[2786]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeb88ff6c0 a2=0 a3=7ffeb88ff6ac items=0 ppid=2709 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.220000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Nov 1 01:29:26.222000 audit[2787]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.222000 audit[2787]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe524c41f0 a2=0 a3=7ffe524c41dc items=0 ppid=2709 pid=2787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 01:29:26.229000 audit[2789]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2789 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.229000 audit[2789]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1783ab00 a2=0 a3=7fff1783aaec items=0 ppid=2709 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.229000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 01:29:26.231000 audit[2790]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2790 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
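The NETFILTER_CFG entries in this stretch appear to be kube-proxy (the iptables calls all share ppid=2709, exe=/usr/sbin/xtables-nft-multi) registering its standard chains (KUBE-SERVICES, KUBE-NODEPORTS, KUBE-EXTERNAL-SERVICES, KUBE-FORWARD, KUBE-POSTROUTING, and so on) in the mangle, nat, and filter tables. The same proctitle decoding shown earlier recovers the individual iptables invocations; for example, the record for pid 2787 above decodes as follows (a sketch, reusing the hex exactly as logged):

    p = "69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572"
    print(" ".join(arg.decode() for arg in bytes.fromhex(p).split(b"\x00")))
    # iptables -w 5 -N KUBE-NODEPORTS -t filter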
Nov 1 01:29:26.231000 audit[2790]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb8b62630 a2=0 a3=7ffcb8b6261c items=0 ppid=2709 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.231000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 01:29:26.238000 audit[2792]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2792 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.238000 audit[2792]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd214eef40 a2=0 a3=7ffd214eef2c items=0 ppid=2709 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:29:26.247000 audit[2795]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.247000 audit[2795]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffca6074230 a2=0 a3=7ffca607421c items=0 ppid=2709 pid=2795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.247000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:29:26.250000 audit[2796]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.250000 audit[2796]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebfc57500 a2=0 a3=7ffebfc574ec items=0 ppid=2709 pid=2796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.250000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 01:29:26.256000 audit[2798]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2798 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.256000 audit[2798]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff896012f0 a2=0 a3=7fff896012dc items=0 ppid=2709 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.256000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 01:29:26.259000 
audit[2799]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2799 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.259000 audit[2799]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc31f7f650 a2=0 a3=7ffc31f7f63c items=0 ppid=2709 pid=2799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 01:29:26.265000 audit[2801]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.265000 audit[2801]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff6cf35f60 a2=0 a3=7fff6cf35f4c items=0 ppid=2709 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.265000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Nov 1 01:29:26.275000 audit[2804]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2804 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.275000 audit[2804]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd4dba6890 a2=0 a3=7ffd4dba687c items=0 ppid=2709 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Nov 1 01:29:26.284000 audit[2807]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.284000 audit[2807]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd9ab46940 a2=0 a3=7ffd9ab4692c items=0 ppid=2709 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.284000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Nov 1 01:29:26.287000 audit[2808]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2808 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.287000 audit[2808]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe4e0841e0 a2=0 a3=7ffe4e0841cc items=0 ppid=2709 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.287000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 01:29:26.293000 audit[2810]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2810 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.293000 audit[2810]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd59943e00 a2=0 a3=7ffd59943dec items=0 ppid=2709 pid=2810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.293000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:29:26.301000 audit[2813]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2813 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.301000 audit[2813]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf962bb50 a2=0 a3=7ffcf962bb3c items=0 ppid=2709 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:29:26.304000 audit[2814]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2814 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.304000 audit[2814]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9e6f0250 a2=0 a3=7ffd9e6f023c items=0 ppid=2709 pid=2814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.304000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 01:29:26.310000 audit[2816]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2816 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:29:26.310000 audit[2816]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcad052ee0 a2=0 a3=7ffcad052ecc items=0 ppid=2709 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.310000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 01:29:26.416000 audit[2822]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2822 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:26.416000 audit[2822]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe5fa599d0 a2=0 a3=7ffe5fa599bc items=0 ppid=2709 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:26.449000 audit[2822]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2822 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:26.449000 audit[2822]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe5fa599d0 a2=0 a3=7ffe5fa599bc items=0 ppid=2709 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.449000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:26.452000 audit[2827]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2827 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.452000 audit[2827]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff66d50a50 a2=0 a3=7fff66d50a3c items=0 ppid=2709 pid=2827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.452000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 01:29:26.459000 audit[2829]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2829 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.459000 audit[2829]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff8d8a0310 a2=0 a3=7fff8d8a02fc items=0 ppid=2709 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.459000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Nov 1 01:29:26.469000 audit[2832]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2832 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.469000 audit[2832]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe53e37d50 a2=0 a3=7ffe53e37d3c items=0 ppid=2709 pid=2832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.469000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Nov 1 01:29:26.472000 audit[2833]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2833 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.472000 audit[2833]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc51100d70 a2=0 a3=7ffc51100d5c items=0 
ppid=2709 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 01:29:26.478000 audit[2835]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2835 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.478000 audit[2835]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff15f90d30 a2=0 a3=7fff15f90d1c items=0 ppid=2709 pid=2835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.478000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 01:29:26.480000 audit[2836]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2836 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.480000 audit[2836]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd526ba220 a2=0 a3=7ffd526ba20c items=0 ppid=2709 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 01:29:26.487000 audit[2838]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2838 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.487000 audit[2838]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc5870ff80 a2=0 a3=7ffc5870ff6c items=0 ppid=2709 pid=2838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.487000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:29:26.497000 audit[2841]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2841 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.497000 audit[2841]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffeae5a1ce0 a2=0 a3=7ffeae5a1ccc items=0 ppid=2709 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.497000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:29:26.499000 audit[2842]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2842 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.499000 audit[2842]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8a4db8d0 a2=0 a3=7ffc8a4db8bc items=0 ppid=2709 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.499000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 01:29:26.505000 audit[2844]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2844 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.505000 audit[2844]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1a3cbd80 a2=0 a3=7fff1a3cbd6c items=0 ppid=2709 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 01:29:26.508000 audit[2845]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2845 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.508000 audit[2845]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd19320e80 a2=0 a3=7ffd19320e6c items=0 ppid=2709 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.508000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 01:29:26.515000 audit[2847]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2847 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.515000 audit[2847]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd96bab90 a2=0 a3=7fffd96bab7c items=0 ppid=2709 pid=2847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.515000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Nov 1 01:29:26.524000 audit[2850]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2850 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.524000 audit[2850]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfc4fc770 a2=0 a3=7ffcfc4fc75c items=0 ppid=2709 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.524000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Nov 1 01:29:26.533000 audit[2853]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2853 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.533000 audit[2853]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffee7cdd00 a2=0 a3=7fffee7cdcec items=0 ppid=2709 pid=2853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.533000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Nov 1 01:29:26.536000 audit[2854]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2854 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.536000 audit[2854]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff6970a060 a2=0 a3=7fff6970a04c items=0 ppid=2709 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.536000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 01:29:26.541000 audit[2856]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2856 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.541000 audit[2856]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffeb0c37f70 a2=0 a3=7ffeb0c37f5c items=0 ppid=2709 pid=2856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.541000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:29:26.550000 audit[2859]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2859 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.550000 audit[2859]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd10e4afa0 a2=0 a3=7ffd10e4af8c items=0 ppid=2709 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:29:26.553000 audit[2860]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2860 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.553000 audit[2860]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb7805030 a2=0 
a3=7fffb780501c items=0 ppid=2709 pid=2860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.553000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 01:29:26.558000 audit[2862]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2862 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.558000 audit[2862]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff7ed5c780 a2=0 a3=7fff7ed5c76c items=0 ppid=2709 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 01:29:26.561000 audit[2863]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2863 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.561000 audit[2863]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2dec87e0 a2=0 a3=7fff2dec87cc items=0 ppid=2709 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.561000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 01:29:26.567000 audit[2865]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2865 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.567000 audit[2865]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdfa377330 a2=0 a3=7ffdfa37731c items=0 ppid=2709 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.567000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 01:29:26.576000 audit[2868]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2868 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:29:26.576000 audit[2868]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffff155dd10 a2=0 a3=7ffff155dcfc items=0 ppid=2709 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.576000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 01:29:26.584000 audit[2870]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2870 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 01:29:26.584000 audit[2870]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffccbf48f80 a2=0 a3=7ffccbf48f6c items=0 ppid=2709 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.584000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:26.586000 audit[2870]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2870 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 01:29:26.586000 audit[2870]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffccbf48f80 a2=0 a3=7ffccbf48f6c items=0 ppid=2709 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:26.586000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:28.172591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2332069820.mount: Deactivated successfully. Nov 1 01:29:28.683710 env[1561]: time="2025-11-01T01:29:28.683685719Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:28.684242 env[1561]: time="2025-11-01T01:29:28.684229048Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:28.684862 env[1561]: time="2025-11-01T01:29:28.684849290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:28.685546 env[1561]: time="2025-11-01T01:29:28.685534911Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:28.686140 env[1561]: time="2025-11-01T01:29:28.686126648Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 01:29:28.688009 env[1561]: time="2025-11-01T01:29:28.687994684Z" level=info msg="CreateContainer within sandbox \"8d440c9ac29f13f6e22b1a6bfdc7b6acdefc3cce877c13d6b9abfc24f204249c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 01:29:28.692801 env[1561]: time="2025-11-01T01:29:28.692782430Z" level=info msg="CreateContainer within sandbox \"8d440c9ac29f13f6e22b1a6bfdc7b6acdefc3cce877c13d6b9abfc24f204249c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dbb8365d076ff9b7456132debfebc3e3008fc745d66a4587cc21934b5f9719d4\"" Nov 1 01:29:28.693133 env[1561]: time="2025-11-01T01:29:28.693099579Z" level=info msg="StartContainer for \"dbb8365d076ff9b7456132debfebc3e3008fc745d66a4587cc21934b5f9719d4\"" Nov 1 01:29:28.716527 systemd[1]: Started cri-containerd-dbb8365d076ff9b7456132debfebc3e3008fc745d66a4587cc21934b5f9719d4.scope. 
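Note on the audit records above: the proctitle= field in each PROCTITLE record is the invoking command line, hex-encoded with NUL bytes separating the arguments, which is how the kube-proxy iptables/ip6tables invocations that create the KUBE-* chains end up in this journal. A minimal decoding sketch in Python (the sample value is copied verbatim from the iptables-restore records above):

def decode_proctitle(hex_value):
    # The PROCTITLE value is the raw command line, hex-encoded,
    # with NUL bytes separating the individual arguments.
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

# Sample copied verbatim from the iptables-restore PROCTITLE records above.
sample = "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
print(decode_proctitle(sample))
# -> ['iptables-restore', '-w', '5', '--noflush', '--counters']

Each rule change therefore shows up as a NETFILTER_CFG/SYSCALL/PROCTITLE triple: NETFILTER_CFG names the table and nft operation, SYSCALL records the call that carried it, and PROCTITLE carries the command line decoded as above. Some proctitle values end abruptly (for example ...4B5542452D50524F58); that truncation is in the length-limited audit record itself, not an artifact of the decoder.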
Nov 1 01:29:28.720000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit: BPF prog-id=117 op=LOAD Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2651 pid=2878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:28.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462623833363564303736666639623734353631333264656266656263 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2651 pid=2878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:28.720000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462623833363564303736666639623734353631333264656266656263 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit: BPF prog-id=118 op=LOAD Nov 1 01:29:28.720000 audit[2878]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0001894b0 items=0 ppid=2651 pid=2878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:28.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462623833363564303736666639623734353631333264656266656263 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: 
denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.720000 audit: BPF prog-id=119 op=LOAD Nov 1 01:29:28.720000 audit[2878]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0001894f8 items=0 ppid=2651 pid=2878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:28.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462623833363564303736666639623734353631333264656266656263 Nov 1 01:29:28.721000 audit: BPF prog-id=119 op=UNLOAD Nov 1 01:29:28.721000 audit: BPF prog-id=118 op=UNLOAD Nov 1 01:29:28.721000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.721000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.721000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.721000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.721000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.721000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.721000 audit[2878]: AVC 
avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.721000 audit[2878]: AVC avc: denied { perfmon } for pid=2878 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.721000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.721000 audit[2878]: AVC avc: denied { bpf } for pid=2878 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:28.721000 audit: BPF prog-id=120 op=LOAD Nov 1 01:29:28.721000 audit[2878]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000189908 items=0 ppid=2651 pid=2878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:28.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462623833363564303736666639623734353631333264656266656263 Nov 1 01:29:28.727957 env[1561]: time="2025-11-01T01:29:28.727933551Z" level=info msg="StartContainer for \"dbb8365d076ff9b7456132debfebc3e3008fc745d66a4587cc21934b5f9719d4\" returns successfully" Nov 1 01:29:29.200174 kubelet[2505]: I1101 01:29:29.200035 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l5whp" podStartSLOduration=5.199987932 podStartE2EDuration="5.199987932s" podCreationTimestamp="2025-11-01 01:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:29:26.206734574 +0000 UTC m=+6.112546809" watchObservedRunningTime="2025-11-01 01:29:29.199987932 +0000 UTC m=+9.105800176" Nov 1 01:29:29.201186 kubelet[2505]: I1101 01:29:29.200507 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-v4r7p" podStartSLOduration=2.31923478 podStartE2EDuration="5.200469424s" podCreationTimestamp="2025-11-01 01:29:24 +0000 UTC" firstStartedPulling="2025-11-01 01:29:25.805369343 +0000 UTC m=+5.711181527" lastFinishedPulling="2025-11-01 01:29:28.686603986 +0000 UTC m=+8.592416171" observedRunningTime="2025-11-01 01:29:29.199789318 +0000 UTC m=+9.105601577" watchObservedRunningTime="2025-11-01 01:29:29.200469424 +0000 UTC m=+9.106281678" Nov 1 01:29:33.387747 sudo[1768]: pam_unix(sudo:session): session closed for user root Nov 1 01:29:33.387000 audit[1768]: USER_END pid=1768 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:29:33.388631 sshd[1765]: pam_unix(sshd:session): session closed for user core Nov 1 01:29:33.390439 systemd[1]: sshd@8-139.178.94.15:22-147.75.109.163:50102.service: Deactivated successfully. Nov 1 01:29:33.391163 systemd[1]: session-11.scope: Deactivated successfully. 
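The AVC bursts above (denied { bpf } and { perfmon } for comm="runc" and comm="systemd") accompany each container start on this host: capability=39 corresponds to CAP_BPF and capability=38 to CAP_PERFMON, and although the denials are logged with permissive=0, the adjacent SYSCALL records still show success=yes and the BPF prog-id=... op=LOAD events follow. A rough tally sketch for a journal dump in this format (the regex only mirrors the field layout seen in the records above and is illustration, not a general audit parser):

import collections, re, sys

AVC_RE = re.compile(r'AVC avc:\s+denied\s+\{ (?P<perm>[^}]+) \}.*?comm="(?P<comm>[^"]+)"')

def summarize(lines):
    # Count denials per (comm, permission) pair; wrapped lines may hold
    # several AVC records, so scan each line with finditer.
    counts = collections.Counter()
    for line in lines:
        for m in AVC_RE.finditer(line):
            counts[(m.group("comm"), m.group("perm").strip())] += 1
    return counts

if __name__ == "__main__":
    for (comm, perm), n in summarize(sys.stdin).most_common():
        print(f"{n:6d}  comm={comm:<12} denied={{ {perm} }}")

Fed this journal, the top entries are exactly the runc and systemd bpf/perfmon denials shown above.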
Nov 1 01:29:33.391298 systemd[1]: session-11.scope: Consumed 4.026s CPU time. Nov 1 01:29:33.391989 systemd-logind[1596]: Session 11 logged out. Waiting for processes to exit. Nov 1 01:29:33.392474 systemd-logind[1596]: Removed session 11. Nov 1 01:29:33.414194 kernel: kauditd_printk_skb: 359 callbacks suppressed Nov 1 01:29:33.414427 kernel: audit: type=1106 audit(1761960573.387:920): pid=1768 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:29:33.387000 audit[1768]: CRED_DISP pid=1768 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:29:33.588458 kernel: audit: type=1104 audit(1761960573.387:921): pid=1768 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:29:33.588543 kernel: audit: type=1106 audit(1761960573.388:922): pid=1765 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:29:33.388000 audit[1765]: USER_END pid=1765 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:29:33.388000 audit[1765]: CRED_DISP pid=1765 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:29:33.771744 kernel: audit: type=1104 audit(1761960573.388:923): pid=1765 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:29:33.771849 kernel: audit: type=1131 audit(1761960573.390:924): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.94.15:22-147.75.109.163:50102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:29:33.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.94.15:22-147.75.109.163:50102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:29:33.859784 kernel: audit: type=1325 audit(1761960573.739:925): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:33.739000 audit[3039]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:33.739000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffca70cd670 a2=0 a3=7ffca70cd65c items=0 ppid=2709 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:34.014874 kernel: audit: type=1300 audit(1761960573.739:925): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffca70cd670 a2=0 a3=7ffca70cd65c items=0 ppid=2709 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:34.014952 kernel: audit: type=1327 audit(1761960573.739:925): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:33.739000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:34.071000 audit[3039]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:34.071000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca70cd670 a2=0 a3=0 items=0 ppid=2709 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:34.226838 kernel: audit: type=1325 audit(1761960574.071:926): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:34.226902 kernel: audit: type=1300 audit(1761960574.071:926): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca70cd670 a2=0 a3=0 items=0 ppid=2709 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:34.071000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:35.232000 audit[3042]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:35.232000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc424b7a10 a2=0 a3=7ffc424b79fc items=0 ppid=2709 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:35.232000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:35.245000 audit[3042]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:35.245000 audit[3042]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc424b7a10 a2=0 a3=0 items=0 ppid=2709 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:35.245000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:36.264000 audit[3044]: NETFILTER_CFG table=filter:93 family=2 entries=18 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:36.264000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffec58d9860 a2=0 a3=7ffec58d984c items=0 ppid=2709 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:36.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:36.283000 audit[3044]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:36.283000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffec58d9860 a2=0 a3=0 items=0 ppid=2709 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:36.283000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:37.481000 audit[3046]: NETFILTER_CFG table=filter:95 family=2 entries=21 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:37.481000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffed5475100 a2=0 a3=7ffed54750ec items=0 ppid=2709 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:37.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:37.501000 audit[3046]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:37.501000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed5475100 a2=0 a3=0 items=0 ppid=2709 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:37.501000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:37.522012 systemd[1]: Created slice kubepods-besteffort-podf62f92ed_be8a_4d64_a329_0d8e5e3a7645.slice. 
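After the "kauditd_printk_skb: ... callbacks suppressed" message, the backlogged records are replayed by the kernel with numeric type= values rather than names. Pairing each type number above with the userspace record carrying the same audit serial gives the mapping below, a lookup sketch covering only the types that actually occur in this section:

# Numeric audit record types observed in this log, paired with the names
# the same events carry in the userspace records above.
AUDIT_TYPES = {
    1104: "CRED_DISP",      # pam_unix setcred on session close (sudo, sshd)
    1106: "USER_END",       # pam session_close
    1131: "SERVICE_STOP",   # systemd unit deactivation
    1300: "SYSCALL",        # syscall argument record
    1325: "NETFILTER_CFG",  # nft table/chain/rule registration
    1327: "PROCTITLE",      # hex-encoded command line
}

def label(type_number):
    return AUDIT_TYPES.get(type_number, f"unknown({type_number})")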
Nov 1 01:29:37.558489 kubelet[2505]: I1101 01:29:37.558447 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f62f92ed-be8a-4d64-a329-0d8e5e3a7645-tigera-ca-bundle\") pod \"calico-typha-7548fd47d8-bt7pq\" (UID: \"f62f92ed-be8a-4d64-a329-0d8e5e3a7645\") " pod="calico-system/calico-typha-7548fd47d8-bt7pq" Nov 1 01:29:37.558968 kubelet[2505]: I1101 01:29:37.558501 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f62f92ed-be8a-4d64-a329-0d8e5e3a7645-typha-certs\") pod \"calico-typha-7548fd47d8-bt7pq\" (UID: \"f62f92ed-be8a-4d64-a329-0d8e5e3a7645\") " pod="calico-system/calico-typha-7548fd47d8-bt7pq" Nov 1 01:29:37.558968 kubelet[2505]: I1101 01:29:37.558526 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6j52\" (UniqueName: \"kubernetes.io/projected/f62f92ed-be8a-4d64-a329-0d8e5e3a7645-kube-api-access-w6j52\") pod \"calico-typha-7548fd47d8-bt7pq\" (UID: \"f62f92ed-be8a-4d64-a329-0d8e5e3a7645\") " pod="calico-system/calico-typha-7548fd47d8-bt7pq" Nov 1 01:29:37.723004 systemd[1]: Created slice kubepods-besteffort-podaca8b418_b773_475a_b6a2_ccd0ad54031e.slice. Nov 1 01:29:37.759726 kubelet[2505]: I1101 01:29:37.759614 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aca8b418-b773-475a-b6a2-ccd0ad54031e-lib-modules\") pod \"calico-node-xr2tp\" (UID: \"aca8b418-b773-475a-b6a2-ccd0ad54031e\") " pod="calico-system/calico-node-xr2tp" Nov 1 01:29:37.759726 kubelet[2505]: I1101 01:29:37.759684 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aca8b418-b773-475a-b6a2-ccd0ad54031e-node-certs\") pod \"calico-node-xr2tp\" (UID: \"aca8b418-b773-475a-b6a2-ccd0ad54031e\") " pod="calico-system/calico-node-xr2tp" Nov 1 01:29:37.759981 kubelet[2505]: I1101 01:29:37.759728 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aca8b418-b773-475a-b6a2-ccd0ad54031e-var-lib-calico\") pod \"calico-node-xr2tp\" (UID: \"aca8b418-b773-475a-b6a2-ccd0ad54031e\") " pod="calico-system/calico-node-xr2tp" Nov 1 01:29:37.759981 kubelet[2505]: I1101 01:29:37.759757 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aca8b418-b773-475a-b6a2-ccd0ad54031e-cni-bin-dir\") pod \"calico-node-xr2tp\" (UID: \"aca8b418-b773-475a-b6a2-ccd0ad54031e\") " pod="calico-system/calico-node-xr2tp" Nov 1 01:29:37.759981 kubelet[2505]: I1101 01:29:37.759777 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aca8b418-b773-475a-b6a2-ccd0ad54031e-cni-log-dir\") pod \"calico-node-xr2tp\" (UID: \"aca8b418-b773-475a-b6a2-ccd0ad54031e\") " pod="calico-system/calico-node-xr2tp" Nov 1 01:29:37.759981 kubelet[2505]: I1101 01:29:37.759838 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aca8b418-b773-475a-b6a2-ccd0ad54031e-var-run-calico\") pod \"calico-node-xr2tp\" 
(UID: \"aca8b418-b773-475a-b6a2-ccd0ad54031e\") " pod="calico-system/calico-node-xr2tp" Nov 1 01:29:37.759981 kubelet[2505]: I1101 01:29:37.759883 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aca8b418-b773-475a-b6a2-ccd0ad54031e-xtables-lock\") pod \"calico-node-xr2tp\" (UID: \"aca8b418-b773-475a-b6a2-ccd0ad54031e\") " pod="calico-system/calico-node-xr2tp" Nov 1 01:29:37.760286 kubelet[2505]: I1101 01:29:37.759919 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aca8b418-b773-475a-b6a2-ccd0ad54031e-policysync\") pod \"calico-node-xr2tp\" (UID: \"aca8b418-b773-475a-b6a2-ccd0ad54031e\") " pod="calico-system/calico-node-xr2tp" Nov 1 01:29:37.760286 kubelet[2505]: I1101 01:29:37.759958 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aca8b418-b773-475a-b6a2-ccd0ad54031e-cni-net-dir\") pod \"calico-node-xr2tp\" (UID: \"aca8b418-b773-475a-b6a2-ccd0ad54031e\") " pod="calico-system/calico-node-xr2tp" Nov 1 01:29:37.760286 kubelet[2505]: I1101 01:29:37.759990 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nwkj\" (UniqueName: \"kubernetes.io/projected/aca8b418-b773-475a-b6a2-ccd0ad54031e-kube-api-access-4nwkj\") pod \"calico-node-xr2tp\" (UID: \"aca8b418-b773-475a-b6a2-ccd0ad54031e\") " pod="calico-system/calico-node-xr2tp" Nov 1 01:29:37.760286 kubelet[2505]: I1101 01:29:37.760033 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aca8b418-b773-475a-b6a2-ccd0ad54031e-flexvol-driver-host\") pod \"calico-node-xr2tp\" (UID: \"aca8b418-b773-475a-b6a2-ccd0ad54031e\") " pod="calico-system/calico-node-xr2tp" Nov 1 01:29:37.760286 kubelet[2505]: I1101 01:29:37.760068 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca8b418-b773-475a-b6a2-ccd0ad54031e-tigera-ca-bundle\") pod \"calico-node-xr2tp\" (UID: \"aca8b418-b773-475a-b6a2-ccd0ad54031e\") " pod="calico-system/calico-node-xr2tp" Nov 1 01:29:37.828519 env[1561]: time="2025-11-01T01:29:37.828391423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7548fd47d8-bt7pq,Uid:f62f92ed-be8a-4d64-a329-0d8e5e3a7645,Namespace:calico-system,Attempt:0,}" Nov 1 01:29:37.852943 env[1561]: time="2025-11-01T01:29:37.852759231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:29:37.852943 env[1561]: time="2025-11-01T01:29:37.852880611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:29:37.852943 env[1561]: time="2025-11-01T01:29:37.852921758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:29:37.853666 env[1561]: time="2025-11-01T01:29:37.853337533Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c6ac1adf0db4cfaf588fef3bd6bbf25d265f1b884c18839c70a2227b95da270 pid=3056 runtime=io.containerd.runc.v2 Nov 1 01:29:37.864826 kubelet[2505]: E1101 01:29:37.864742 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.864826 kubelet[2505]: W1101 01:29:37.864808 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.865243 kubelet[2505]: E1101 01:29:37.864868 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.869770 kubelet[2505]: E1101 01:29:37.869702 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.869770 kubelet[2505]: W1101 01:29:37.869762 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.870254 kubelet[2505]: E1101 01:29:37.869827 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.885198 kubelet[2505]: E1101 01:29:37.885159 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.885198 kubelet[2505]: W1101 01:29:37.885183 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.885507 kubelet[2505]: E1101 01:29:37.885209 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.889322 systemd[1]: Started cri-containerd-6c6ac1adf0db4cfaf588fef3bd6bbf25d265f1b884c18839c70a2227b95da270.scope. 
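[Editor's note] The repeated driver-call failures above come from the kubelet's FlexVolume plugin prober: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init" and tries to parse the command's stdout as JSON; because the binary is not present yet, the call produces no output and the unmarshal fails with "unexpected end of JSON input". A minimal sketch of what a FlexVolume driver is expected to print for "init" follows; the driver path and probing behavior are taken from the log itself, while the exact capability flags and the Python form are illustrative assumptions, not Calico's real uds driver.

    #!/usr/bin/env python3
    # Sketch of a FlexVolume driver answering the kubelet's "init" call.
    # Illustrative only: the kubelet expects a single JSON object on stdout;
    # an empty stdout is what yields "unexpected end of JSON input" above.
    import json
    import sys

    def main() -> int:
        if len(sys.argv) > 1 and sys.argv[1] == "init":
            # Capability flags here are assumed values for the sketch.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())
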
Nov 1 01:29:37.892925 kubelet[2505]: E1101 01:29:37.892871 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:29:37.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.897000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.897000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit: BPF prog-id=121 op=LOAD Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=3056 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:37.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663366163316164663064623463666166353838666566336264366262 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=3056 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:37.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663366163316164663064623463666166353838666566336264366262 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit: BPF prog-id=122 op=LOAD Nov 1 01:29:37.898000 audit[3065]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0002214f0 items=0 ppid=3056 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:37.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663366163316164663064623463666166353838666566336264366262 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for 
pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit: BPF prog-id=123 op=LOAD Nov 1 01:29:37.898000 audit[3065]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000221538 items=0 ppid=3056 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:37.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663366163316164663064623463666166353838666566336264366262 Nov 1 01:29:37.898000 audit: BPF prog-id=123 op=UNLOAD Nov 1 01:29:37.898000 audit: BPF prog-id=122 op=UNLOAD Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for 
pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { perfmon } for pid=3065 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit[3065]: AVC avc: denied { bpf } for pid=3065 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:37.898000 audit: BPF prog-id=124 op=LOAD Nov 1 01:29:37.898000 audit[3065]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000221948 items=0 ppid=3056 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:37.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663366163316164663064623463666166353838666566336264366262 Nov 1 01:29:37.921791 env[1561]: time="2025-11-01T01:29:37.921760865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7548fd47d8-bt7pq,Uid:f62f92ed-be8a-4d64-a329-0d8e5e3a7645,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c6ac1adf0db4cfaf588fef3bd6bbf25d265f1b884c18839c70a2227b95da270\"" Nov 1 01:29:37.922607 env[1561]: time="2025-11-01T01:29:37.922591053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 01:29:37.956372 kubelet[2505]: E1101 01:29:37.956314 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.956372 kubelet[2505]: W1101 01:29:37.956361 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.956827 kubelet[2505]: E1101 01:29:37.956439 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:37.956968 kubelet[2505]: E1101 01:29:37.956884 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.956968 kubelet[2505]: W1101 01:29:37.956911 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.956968 kubelet[2505]: E1101 01:29:37.956940 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.957414 kubelet[2505]: E1101 01:29:37.957369 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.957559 kubelet[2505]: W1101 01:29:37.957430 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.957559 kubelet[2505]: E1101 01:29:37.957463 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.957975 kubelet[2505]: E1101 01:29:37.957942 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.957975 kubelet[2505]: W1101 01:29:37.957969 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.958215 kubelet[2505]: E1101 01:29:37.957998 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.958472 kubelet[2505]: E1101 01:29:37.958426 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.958472 kubelet[2505]: W1101 01:29:37.958458 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.958736 kubelet[2505]: E1101 01:29:37.958487 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.958904 kubelet[2505]: E1101 01:29:37.958853 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.958904 kubelet[2505]: W1101 01:29:37.958876 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.958904 kubelet[2505]: E1101 01:29:37.958899 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:37.959286 kubelet[2505]: E1101 01:29:37.959251 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.959286 kubelet[2505]: W1101 01:29:37.959279 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.959549 kubelet[2505]: E1101 01:29:37.959302 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.959802 kubelet[2505]: E1101 01:29:37.959753 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.959802 kubelet[2505]: W1101 01:29:37.959783 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.960038 kubelet[2505]: E1101 01:29:37.959807 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.960244 kubelet[2505]: E1101 01:29:37.960213 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.960244 kubelet[2505]: W1101 01:29:37.960236 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.960668 kubelet[2505]: E1101 01:29:37.960259 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.960668 kubelet[2505]: E1101 01:29:37.960638 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.960668 kubelet[2505]: W1101 01:29:37.960660 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.961056 kubelet[2505]: E1101 01:29:37.960683 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.961176 kubelet[2505]: E1101 01:29:37.961072 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.961176 kubelet[2505]: W1101 01:29:37.961095 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.961176 kubelet[2505]: E1101 01:29:37.961120 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:37.961561 kubelet[2505]: E1101 01:29:37.961492 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.961561 kubelet[2505]: W1101 01:29:37.961514 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.961561 kubelet[2505]: E1101 01:29:37.961537 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.962012 kubelet[2505]: E1101 01:29:37.961983 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.962012 kubelet[2505]: W1101 01:29:37.962009 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.962284 kubelet[2505]: E1101 01:29:37.962035 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.962476 kubelet[2505]: E1101 01:29:37.962449 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.962476 kubelet[2505]: W1101 01:29:37.962473 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.962735 kubelet[2505]: E1101 01:29:37.962497 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.963003 kubelet[2505]: E1101 01:29:37.962968 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.963003 kubelet[2505]: W1101 01:29:37.962994 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.963432 kubelet[2505]: E1101 01:29:37.963019 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.963656 kubelet[2505]: E1101 01:29:37.963433 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.963656 kubelet[2505]: W1101 01:29:37.963457 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.963656 kubelet[2505]: E1101 01:29:37.963479 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:37.964150 kubelet[2505]: E1101 01:29:37.963994 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.964150 kubelet[2505]: W1101 01:29:37.964019 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.964150 kubelet[2505]: E1101 01:29:37.964053 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.964618 kubelet[2505]: E1101 01:29:37.964581 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.964767 kubelet[2505]: W1101 01:29:37.964624 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.964767 kubelet[2505]: E1101 01:29:37.964667 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.965207 kubelet[2505]: E1101 01:29:37.965171 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.965427 kubelet[2505]: W1101 01:29:37.965216 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.965427 kubelet[2505]: E1101 01:29:37.965244 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.965776 kubelet[2505]: E1101 01:29:37.965742 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.965912 kubelet[2505]: W1101 01:29:37.965777 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.965912 kubelet[2505]: E1101 01:29:37.965811 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.966570 kubelet[2505]: E1101 01:29:37.966526 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.966570 kubelet[2505]: W1101 01:29:37.966565 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.966876 kubelet[2505]: E1101 01:29:37.966603 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:37.966876 kubelet[2505]: I1101 01:29:37.966657 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79df0ba2-6e86-422c-8f93-652dfb942b69-kubelet-dir\") pod \"csi-node-driver-9wz7k\" (UID: \"79df0ba2-6e86-422c-8f93-652dfb942b69\") " pod="calico-system/csi-node-driver-9wz7k" Nov 1 01:29:37.967160 kubelet[2505]: E1101 01:29:37.967131 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.967279 kubelet[2505]: W1101 01:29:37.967159 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.967279 kubelet[2505]: E1101 01:29:37.967185 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.967279 kubelet[2505]: I1101 01:29:37.967232 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v97d7\" (UniqueName: \"kubernetes.io/projected/79df0ba2-6e86-422c-8f93-652dfb942b69-kube-api-access-v97d7\") pod \"csi-node-driver-9wz7k\" (UID: \"79df0ba2-6e86-422c-8f93-652dfb942b69\") " pod="calico-system/csi-node-driver-9wz7k" Nov 1 01:29:37.967953 kubelet[2505]: E1101 01:29:37.967885 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.967953 kubelet[2505]: W1101 01:29:37.967932 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.968204 kubelet[2505]: E1101 01:29:37.967968 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.968521 kubelet[2505]: E1101 01:29:37.968464 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.968521 kubelet[2505]: W1101 01:29:37.968493 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.968820 kubelet[2505]: E1101 01:29:37.968523 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.969104 kubelet[2505]: E1101 01:29:37.969040 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.969104 kubelet[2505]: W1101 01:29:37.969078 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.969369 kubelet[2505]: E1101 01:29:37.969113 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:37.969369 kubelet[2505]: I1101 01:29:37.969175 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/79df0ba2-6e86-422c-8f93-652dfb942b69-registration-dir\") pod \"csi-node-driver-9wz7k\" (UID: \"79df0ba2-6e86-422c-8f93-652dfb942b69\") " pod="calico-system/csi-node-driver-9wz7k" Nov 1 01:29:37.969670 kubelet[2505]: E1101 01:29:37.969637 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.969805 kubelet[2505]: W1101 01:29:37.969670 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.969805 kubelet[2505]: E1101 01:29:37.969701 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.970237 kubelet[2505]: E1101 01:29:37.970210 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.970237 kubelet[2505]: W1101 01:29:37.970235 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.970492 kubelet[2505]: E1101 01:29:37.970260 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.970698 kubelet[2505]: E1101 01:29:37.970673 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.970813 kubelet[2505]: W1101 01:29:37.970703 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.970813 kubelet[2505]: E1101 01:29:37.970727 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.970813 kubelet[2505]: I1101 01:29:37.970774 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/79df0ba2-6e86-422c-8f93-652dfb942b69-varrun\") pod \"csi-node-driver-9wz7k\" (UID: \"79df0ba2-6e86-422c-8f93-652dfb942b69\") " pod="calico-system/csi-node-driver-9wz7k" Nov 1 01:29:37.971358 kubelet[2505]: E1101 01:29:37.971319 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.971526 kubelet[2505]: W1101 01:29:37.971361 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.971526 kubelet[2505]: E1101 01:29:37.971416 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:37.971942 kubelet[2505]: E1101 01:29:37.971877 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.971942 kubelet[2505]: W1101 01:29:37.971905 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.971942 kubelet[2505]: E1101 01:29:37.971934 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.972465 kubelet[2505]: E1101 01:29:37.972435 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.972465 kubelet[2505]: W1101 01:29:37.972463 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.972820 kubelet[2505]: E1101 01:29:37.972491 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.972820 kubelet[2505]: I1101 01:29:37.972544 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/79df0ba2-6e86-422c-8f93-652dfb942b69-socket-dir\") pod \"csi-node-driver-9wz7k\" (UID: \"79df0ba2-6e86-422c-8f93-652dfb942b69\") " pod="calico-system/csi-node-driver-9wz7k" Nov 1 01:29:37.973195 kubelet[2505]: E1101 01:29:37.973147 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.973385 kubelet[2505]: W1101 01:29:37.973199 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.973385 kubelet[2505]: E1101 01:29:37.973251 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.973871 kubelet[2505]: E1101 01:29:37.973835 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.974064 kubelet[2505]: W1101 01:29:37.973868 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.974064 kubelet[2505]: E1101 01:29:37.973911 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:37.974556 kubelet[2505]: E1101 01:29:37.974520 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.974556 kubelet[2505]: W1101 01:29:37.974552 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.974915 kubelet[2505]: E1101 01:29:37.974591 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:37.975162 kubelet[2505]: E1101 01:29:37.975128 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:37.975362 kubelet[2505]: W1101 01:29:37.975162 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:37.975362 kubelet[2505]: E1101 01:29:37.975254 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.029204 env[1561]: time="2025-11-01T01:29:38.028965785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xr2tp,Uid:aca8b418-b773-475a-b6a2-ccd0ad54031e,Namespace:calico-system,Attempt:0,}" Nov 1 01:29:38.056264 env[1561]: time="2025-11-01T01:29:38.056022988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:29:38.056264 env[1561]: time="2025-11-01T01:29:38.056178122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:29:38.056264 env[1561]: time="2025-11-01T01:29:38.056236869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:29:38.057003 env[1561]: time="2025-11-01T01:29:38.056822721Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc8fb8c2bedf7540c48a3f4de3c6e178c571df758aa2ea0b2d3c4a14841a1655 pid=3136 runtime=io.containerd.runc.v2 Nov 1 01:29:38.073737 kubelet[2505]: E1101 01:29:38.073638 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.073737 kubelet[2505]: W1101 01:29:38.073693 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.073737 kubelet[2505]: E1101 01:29:38.073735 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:38.074325 kubelet[2505]: E1101 01:29:38.074280 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.074325 kubelet[2505]: W1101 01:29:38.074321 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.074595 kubelet[2505]: E1101 01:29:38.074356 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.075116 kubelet[2505]: E1101 01:29:38.075028 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.075116 kubelet[2505]: W1101 01:29:38.075073 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.075453 kubelet[2505]: E1101 01:29:38.075124 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.075752 kubelet[2505]: E1101 01:29:38.075679 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.075752 kubelet[2505]: W1101 01:29:38.075710 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.075752 kubelet[2505]: E1101 01:29:38.075740 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.076275 kubelet[2505]: E1101 01:29:38.076219 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.076275 kubelet[2505]: W1101 01:29:38.076258 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.076585 kubelet[2505]: E1101 01:29:38.076300 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.076925 kubelet[2505]: E1101 01:29:38.076849 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.076925 kubelet[2505]: W1101 01:29:38.076877 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.076925 kubelet[2505]: E1101 01:29:38.076909 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:38.077434 kubelet[2505]: E1101 01:29:38.077374 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.077434 kubelet[2505]: W1101 01:29:38.077418 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.077657 kubelet[2505]: E1101 01:29:38.077449 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.078010 kubelet[2505]: E1101 01:29:38.077974 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.078010 kubelet[2505]: W1101 01:29:38.078003 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.078278 kubelet[2505]: E1101 01:29:38.078031 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.078513 kubelet[2505]: E1101 01:29:38.078481 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.078513 kubelet[2505]: W1101 01:29:38.078508 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.078760 kubelet[2505]: E1101 01:29:38.078533 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.078987 kubelet[2505]: E1101 01:29:38.078960 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.079107 kubelet[2505]: W1101 01:29:38.078987 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.079107 kubelet[2505]: E1101 01:29:38.079023 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.079564 kubelet[2505]: E1101 01:29:38.079537 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.079564 kubelet[2505]: W1101 01:29:38.079563 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.079793 kubelet[2505]: E1101 01:29:38.079586 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:38.080059 kubelet[2505]: E1101 01:29:38.080032 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.080187 kubelet[2505]: W1101 01:29:38.080060 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.080187 kubelet[2505]: E1101 01:29:38.080086 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.080713 kubelet[2505]: E1101 01:29:38.080678 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.080834 kubelet[2505]: W1101 01:29:38.080715 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.080834 kubelet[2505]: E1101 01:29:38.080749 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.081264 kubelet[2505]: E1101 01:29:38.081234 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.081380 kubelet[2505]: W1101 01:29:38.081265 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.081380 kubelet[2505]: E1101 01:29:38.081294 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.081892 kubelet[2505]: E1101 01:29:38.081834 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.081892 kubelet[2505]: W1101 01:29:38.081873 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.082142 kubelet[2505]: E1101 01:29:38.081908 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.082496 kubelet[2505]: E1101 01:29:38.082443 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.082496 kubelet[2505]: W1101 01:29:38.082475 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.082736 kubelet[2505]: E1101 01:29:38.082506 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:38.083143 kubelet[2505]: E1101 01:29:38.083075 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.083143 kubelet[2505]: W1101 01:29:38.083121 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.083392 kubelet[2505]: E1101 01:29:38.083155 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.083695 kubelet[2505]: E1101 01:29:38.083638 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.083695 kubelet[2505]: W1101 01:29:38.083667 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.083695 kubelet[2505]: E1101 01:29:38.083693 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.084195 kubelet[2505]: E1101 01:29:38.084160 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.084195 kubelet[2505]: W1101 01:29:38.084191 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.084574 kubelet[2505]: E1101 01:29:38.084228 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.084976 kubelet[2505]: E1101 01:29:38.084925 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.084976 kubelet[2505]: W1101 01:29:38.084962 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.085361 kubelet[2505]: E1101 01:29:38.085010 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.085626 kubelet[2505]: E1101 01:29:38.085566 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.085626 kubelet[2505]: W1101 01:29:38.085603 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.085888 kubelet[2505]: E1101 01:29:38.085649 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.085853 systemd[1]: Started cri-containerd-cc8fb8c2bedf7540c48a3f4de3c6e178c571df758aa2ea0b2d3c4a14841a1655.scope. 
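[Editor's note] The audit records interleaved with these entries (and the ones that follow) carry a PROCTITLE field whose value is the process command line, hex-encoded with NUL bytes separating the arguments. The short sketch below decodes such a value back into an argv list; decode_proctitle is a hypothetical helper, and the sample string is only the leading bytes of the proctitle values logged here, which decode to the runc invocation for the sandbox task.

    # Decode an audit PROCTITLE hex value (NUL-separated argv) into a list
    # of arguments. Hypothetical helper for reading the records above.
    def decode_proctitle(hex_value: str) -> list[str]:
        raw = bytes.fromhex(hex_value)
        return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

    if __name__ == "__main__":
        sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
        print(decode_proctitle(sample))  # ['runc', '--root', '/run/containerd/runc/k8s.io']
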
Nov 1 01:29:38.086346 kubelet[2505]: E1101 01:29:38.086183 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.086346 kubelet[2505]: W1101 01:29:38.086212 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.086346 kubelet[2505]: E1101 01:29:38.086249 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.086952 kubelet[2505]: E1101 01:29:38.086889 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.086952 kubelet[2505]: W1101 01:29:38.086927 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.087254 kubelet[2505]: E1101 01:29:38.086968 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.087481 kubelet[2505]: E1101 01:29:38.087447 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.087481 kubelet[2505]: W1101 01:29:38.087476 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.087766 kubelet[2505]: E1101 01:29:38.087503 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.088053 kubelet[2505]: E1101 01:29:38.088018 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.088183 kubelet[2505]: W1101 01:29:38.088054 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.088183 kubelet[2505]: E1101 01:29:38.088096 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:38.110842 kubelet[2505]: E1101 01:29:38.110754 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:38.110842 kubelet[2505]: W1101 01:29:38.110798 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:38.110842 kubelet[2505]: E1101 01:29:38.110841 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:38.112000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.112000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.112000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.112000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.112000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.112000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.112000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.112000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.112000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.113000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.113000 audit: BPF prog-id=125 op=LOAD Nov 1 01:29:38.114000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.114000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=3136 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:38.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363386662386332626564663735343063343861336634646533633665 Nov 1 01:29:38.114000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.114000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=c items=0 ppid=3136 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:38.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363386662386332626564663735343063343861336634646533633665 Nov 1 01:29:38.114000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.114000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.114000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.114000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.114000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.114000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.114000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.114000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.114000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.114000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.114000 audit: BPF prog-id=126 op=LOAD Nov 1 01:29:38.114000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c000025080 items=0 ppid=3136 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:38.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363386662386332626564663735343063343861336634646533633665 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit: BPF prog-id=127 op=LOAD Nov 1 01:29:38.115000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c0000250c8 items=0 ppid=3136 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:38.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363386662386332626564663735343063343861336634646533633665 Nov 1 01:29:38.115000 audit: BPF prog-id=127 op=UNLOAD Nov 1 01:29:38.115000 audit: BPF prog-id=126 op=UNLOAD Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { perfmon } for pid=3145 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit[3145]: AVC avc: denied { bpf } for pid=3145 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:38.115000 audit: BPF prog-id=128 op=LOAD Nov 1 01:29:38.115000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c0000254d8 items=0 ppid=3136 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:38.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363386662386332626564663735343063343861336634646533633665 Nov 1 01:29:38.139313 env[1561]: time="2025-11-01T01:29:38.139224369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xr2tp,Uid:aca8b418-b773-475a-b6a2-ccd0ad54031e,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc8fb8c2bedf7540c48a3f4de3c6e178c571df758aa2ea0b2d3c4a14841a1655\"" Nov 1 01:29:38.518000 audit[3199]: NETFILTER_CFG table=filter:97 family=2 entries=22 op=nft_register_rule pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:38.546464 kernel: kauditd_printk_skb: 133 callbacks suppressed Nov 1 01:29:38.546503 kernel: audit: type=1325 audit(1761960578.518:969): table=filter:97 family=2 entries=22 op=nft_register_rule pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:38.518000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff68bff9f0 a2=0 a3=7fff68bff9dc items=0 ppid=2709 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:38.704068 kernel: audit: type=1300 audit(1761960578.518:969): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff68bff9f0 a2=0 a3=7fff68bff9dc items=0 ppid=2709 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:38.704122 kernel: audit: type=1327 audit(1761960578.518:969): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:38.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:38.765000 audit[3199]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=3199 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:38.765000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff68bff9f0 a2=0 a3=0 items=0 ppid=2709 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:38.923062 kernel: audit: type=1325 audit(1761960578.765:970): table=nat:98 family=2 entries=12 op=nft_register_rule pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:38.923126 kernel: audit: type=1300 audit(1761960578.765:970): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff68bff9f0 a2=0 a3=0 items=0 ppid=2709 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:38.923155 kernel: audit: type=1327 audit(1761960578.765:970): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:38.765000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:39.546074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1129824987.mount: Deactivated successfully. Nov 1 01:29:40.153689 kubelet[2505]: E1101 01:29:40.153648 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:29:40.466761 env[1561]: time="2025-11-01T01:29:40.466709254Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:40.467370 env[1561]: time="2025-11-01T01:29:40.467326194Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:40.467978 env[1561]: time="2025-11-01T01:29:40.467936200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:40.468910 env[1561]: time="2025-11-01T01:29:40.468863594Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:40.469074 env[1561]: time="2025-11-01T01:29:40.469031829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 01:29:40.469628 env[1561]: time="2025-11-01T01:29:40.469614261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 01:29:40.473669 env[1561]: time="2025-11-01T01:29:40.473646172Z" level=info msg="CreateContainer within sandbox \"6c6ac1adf0db4cfaf588fef3bd6bbf25d265f1b884c18839c70a2227b95da270\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 01:29:40.478567 env[1561]: 
time="2025-11-01T01:29:40.478515357Z" level=info msg="CreateContainer within sandbox \"6c6ac1adf0db4cfaf588fef3bd6bbf25d265f1b884c18839c70a2227b95da270\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0ea10b58bb0c605ff8300fd81a7812215eb44f37b9a3332b3d12c86d3f8ab277\"" Nov 1 01:29:40.478861 env[1561]: time="2025-11-01T01:29:40.478795624Z" level=info msg="StartContainer for \"0ea10b58bb0c605ff8300fd81a7812215eb44f37b9a3332b3d12c86d3f8ab277\"" Nov 1 01:29:40.487168 systemd[1]: Started cri-containerd-0ea10b58bb0c605ff8300fd81a7812215eb44f37b9a3332b3d12c86d3f8ab277.scope. Nov 1 01:29:40.492000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.492000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.622291 kernel: audit: type=1400 audit(1761960580.492:971): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.622344 kernel: audit: type=1400 audit(1761960580.492:972): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.622361 kernel: audit: type=1400 audit(1761960580.492:973): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.492000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.686658 kernel: audit: type=1400 audit(1761960580.492:974): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.492000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.492000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.492000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.492000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.492000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.492000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.620000 audit: BPF prog-id=129 op=LOAD Nov 1 01:29:40.621000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.621000 audit[3211]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=3056 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:40.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065613130623538626230633630356666383330306664383161373831 Nov 1 01:29:40.621000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.621000 audit[3211]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=3056 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:40.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065613130623538626230633630356666383330306664383161373831 Nov 1 01:29:40.621000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.621000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.621000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.621000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.621000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.621000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.621000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.621000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 01:29:40.621000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.621000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.621000 audit: BPF prog-id=130 op=LOAD Nov 1 01:29:40.621000 audit[3211]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c00027bdc0 items=0 ppid=3056 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:40.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065613130623538626230633630356666383330306664383161373831 Nov 1 01:29:40.749000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.749000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.749000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.749000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.749000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.749000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.749000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.749000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.749000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.749000 audit: BPF prog-id=131 op=LOAD Nov 1 01:29:40.749000 audit[3211]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c00027be08 items=0 ppid=3056 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:40.749000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065613130623538626230633630356666383330306664383161373831 Nov 1 01:29:40.750000 audit: BPF prog-id=131 op=UNLOAD Nov 1 01:29:40.750000 audit: BPF prog-id=130 op=UNLOAD Nov 1 01:29:40.750000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.750000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.750000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.750000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.750000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.750000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.750000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.750000 audit[3211]: AVC avc: denied { perfmon } for pid=3211 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.750000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.750000 audit[3211]: AVC avc: denied { bpf } for pid=3211 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:40.750000 audit: BPF prog-id=132 op=LOAD Nov 1 01:29:40.750000 audit[3211]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c0003f8218 items=0 ppid=3056 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:40.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065613130623538626230633630356666383330306664383161373831 Nov 1 01:29:40.768267 env[1561]: time="2025-11-01T01:29:40.768240929Z" level=info msg="StartContainer for \"0ea10b58bb0c605ff8300fd81a7812215eb44f37b9a3332b3d12c86d3f8ab277\" returns successfully" Nov 1 01:29:41.251089 kubelet[2505]: I1101 01:29:41.250971 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-7548fd47d8-bt7pq" podStartSLOduration=1.703890071 podStartE2EDuration="4.250937207s" podCreationTimestamp="2025-11-01 01:29:37 +0000 UTC" firstStartedPulling="2025-11-01 01:29:37.922423883 +0000 UTC m=+17.828236068" lastFinishedPulling="2025-11-01 01:29:40.469471019 +0000 UTC m=+20.375283204" observedRunningTime="2025-11-01 01:29:41.250490674 +0000 UTC m=+21.156302968" watchObservedRunningTime="2025-11-01 01:29:41.250937207 +0000 UTC m=+21.156749448" Nov 1 01:29:41.290454 kubelet[2505]: E1101 01:29:41.290346 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.290454 kubelet[2505]: W1101 01:29:41.290420 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.290881 kubelet[2505]: E1101 01:29:41.290476 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.291130 kubelet[2505]: E1101 01:29:41.291046 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.291130 kubelet[2505]: W1101 01:29:41.291084 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.291130 kubelet[2505]: E1101 01:29:41.291117 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.291815 kubelet[2505]: E1101 01:29:41.291739 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.291815 kubelet[2505]: W1101 01:29:41.291777 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.291815 kubelet[2505]: E1101 01:29:41.291810 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.292548 kubelet[2505]: E1101 01:29:41.292468 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.292548 kubelet[2505]: W1101 01:29:41.292498 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.292548 kubelet[2505]: E1101 01:29:41.292528 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:41.293194 kubelet[2505]: E1101 01:29:41.293116 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.293194 kubelet[2505]: W1101 01:29:41.293153 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.293194 kubelet[2505]: E1101 01:29:41.293187 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.293841 kubelet[2505]: E1101 01:29:41.293770 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.293841 kubelet[2505]: W1101 01:29:41.293799 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.293841 kubelet[2505]: E1101 01:29:41.293827 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.294364 kubelet[2505]: E1101 01:29:41.294333 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.294364 kubelet[2505]: W1101 01:29:41.294362 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.294624 kubelet[2505]: E1101 01:29:41.294388 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.294989 kubelet[2505]: E1101 01:29:41.294961 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.295117 kubelet[2505]: W1101 01:29:41.294989 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.295117 kubelet[2505]: E1101 01:29:41.295017 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.295617 kubelet[2505]: E1101 01:29:41.295587 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.295739 kubelet[2505]: W1101 01:29:41.295616 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.295739 kubelet[2505]: E1101 01:29:41.295643 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:41.296175 kubelet[2505]: E1101 01:29:41.296127 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.296175 kubelet[2505]: W1101 01:29:41.296153 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.296443 kubelet[2505]: E1101 01:29:41.296179 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.296753 kubelet[2505]: E1101 01:29:41.296704 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.296753 kubelet[2505]: W1101 01:29:41.296731 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.296753 kubelet[2505]: E1101 01:29:41.296756 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.297314 kubelet[2505]: E1101 01:29:41.297265 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.297314 kubelet[2505]: W1101 01:29:41.297292 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.297588 kubelet[2505]: E1101 01:29:41.297319 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.297919 kubelet[2505]: E1101 01:29:41.297870 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.297919 kubelet[2505]: W1101 01:29:41.297897 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.297919 kubelet[2505]: E1101 01:29:41.297921 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.298471 kubelet[2505]: E1101 01:29:41.298421 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.298471 kubelet[2505]: W1101 01:29:41.298449 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.298743 kubelet[2505]: E1101 01:29:41.298474 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:41.299036 kubelet[2505]: E1101 01:29:41.298971 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.299036 kubelet[2505]: W1101 01:29:41.298998 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.299036 kubelet[2505]: E1101 01:29:41.299023 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.303492 kubelet[2505]: E1101 01:29:41.303448 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.303492 kubelet[2505]: W1101 01:29:41.303479 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.303916 kubelet[2505]: E1101 01:29:41.303509 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.304182 kubelet[2505]: E1101 01:29:41.304096 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.304182 kubelet[2505]: W1101 01:29:41.304132 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.304182 kubelet[2505]: E1101 01:29:41.304171 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.304866 kubelet[2505]: E1101 01:29:41.304825 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.304866 kubelet[2505]: W1101 01:29:41.304863 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.305168 kubelet[2505]: E1101 01:29:41.304903 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.305492 kubelet[2505]: E1101 01:29:41.305428 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.305492 kubelet[2505]: W1101 01:29:41.305458 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.305492 kubelet[2505]: E1101 01:29:41.305484 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:41.306106 kubelet[2505]: E1101 01:29:41.305943 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.306106 kubelet[2505]: W1101 01:29:41.305968 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.306106 kubelet[2505]: E1101 01:29:41.305994 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.306538 kubelet[2505]: E1101 01:29:41.306478 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.306538 kubelet[2505]: W1101 01:29:41.306504 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.306800 kubelet[2505]: E1101 01:29:41.306540 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.307063 kubelet[2505]: E1101 01:29:41.307015 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.307063 kubelet[2505]: W1101 01:29:41.307046 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.307334 kubelet[2505]: E1101 01:29:41.307080 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.307647 kubelet[2505]: E1101 01:29:41.307619 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.307647 kubelet[2505]: W1101 01:29:41.307645 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.307906 kubelet[2505]: E1101 01:29:41.307676 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.308193 kubelet[2505]: E1101 01:29:41.308166 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.308193 kubelet[2505]: W1101 01:29:41.308191 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.308452 kubelet[2505]: E1101 01:29:41.308215 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:41.308732 kubelet[2505]: E1101 01:29:41.308704 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.308971 kubelet[2505]: W1101 01:29:41.308732 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.308971 kubelet[2505]: E1101 01:29:41.308757 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.309274 kubelet[2505]: E1101 01:29:41.309180 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.309274 kubelet[2505]: W1101 01:29:41.309205 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.309274 kubelet[2505]: E1101 01:29:41.309231 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.309786 kubelet[2505]: E1101 01:29:41.309750 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.309786 kubelet[2505]: W1101 01:29:41.309777 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.310129 kubelet[2505]: E1101 01:29:41.309802 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.310325 kubelet[2505]: E1101 01:29:41.310255 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.310325 kubelet[2505]: W1101 01:29:41.310288 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.310325 kubelet[2505]: E1101 01:29:41.310315 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.310883 kubelet[2505]: E1101 01:29:41.310805 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.310883 kubelet[2505]: W1101 01:29:41.310837 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.310883 kubelet[2505]: E1101 01:29:41.310870 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:41.311487 kubelet[2505]: E1101 01:29:41.311381 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.311487 kubelet[2505]: W1101 01:29:41.311436 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.311487 kubelet[2505]: E1101 01:29:41.311462 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.312233 kubelet[2505]: E1101 01:29:41.312174 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.312233 kubelet[2505]: W1101 01:29:41.312219 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.312700 kubelet[2505]: E1101 01:29:41.312257 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.312914 kubelet[2505]: E1101 01:29:41.312853 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.312914 kubelet[2505]: W1101 01:29:41.312881 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.312914 kubelet[2505]: E1101 01:29:41.312909 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:29:41.313657 kubelet[2505]: E1101 01:29:41.313606 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:29:41.313657 kubelet[2505]: W1101 01:29:41.313640 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:29:41.314096 kubelet[2505]: E1101 01:29:41.313672 2505 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:29:42.047019 env[1561]: time="2025-11-01T01:29:42.046962333Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:42.047710 env[1561]: time="2025-11-01T01:29:42.047698138Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:42.048272 env[1561]: time="2025-11-01T01:29:42.048236089Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:42.049239 env[1561]: time="2025-11-01T01:29:42.049227903Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:42.049419 env[1561]: time="2025-11-01T01:29:42.049385878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 01:29:42.051061 env[1561]: time="2025-11-01T01:29:42.051048747Z" level=info msg="CreateContainer within sandbox \"cc8fb8c2bedf7540c48a3f4de3c6e178c571df758aa2ea0b2d3c4a14841a1655\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 01:29:42.056228 env[1561]: time="2025-11-01T01:29:42.056206859Z" level=info msg="CreateContainer within sandbox \"cc8fb8c2bedf7540c48a3f4de3c6e178c571df758aa2ea0b2d3c4a14841a1655\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"abde6011d831ed6d6fa5754f9d48b48a954ea0682bd230168175bb514a60c4cc\"" Nov 1 01:29:42.056526 env[1561]: time="2025-11-01T01:29:42.056475026Z" level=info msg="StartContainer for \"abde6011d831ed6d6fa5754f9d48b48a954ea0682bd230168175bb514a60c4cc\"" Nov 1 01:29:42.065338 systemd[1]: Started cri-containerd-abde6011d831ed6d6fa5754f9d48b48a954ea0682bd230168175bb514a60c4cc.scope. 
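For reference, the figures in the pod_startup_latency_tracker entry at 01:29:41.250 above are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). That relationship is inferred from the logged values themselves, not from kubelet source; a quick check in Python:

    # Re-deriving the calico-typha startup durations logged at 01:29:41.250.
    # All timestamps fall inside minute 01:29 UTC, so seconds-within-the-minute suffice.
    created            = 37.000000000   # podCreationTimestamp  2025-11-01 01:29:37
    first_started_pull = 37.922423883   # firstStartedPulling
    last_finished_pull = 40.469471019   # lastFinishedPulling
    observed_running   = 41.250937207   # observedRunningTime

    e2e         = observed_running - created                # 4.250937207s (podStartE2EDuration)
    pull_window = last_finished_pull - first_started_pull   # 2.547047136s spent pulling the image
    slo         = e2e - pull_window                         # 1.703890071  (podStartSLOduration)

    print(round(e2e, 9), round(pull_window, 9), round(slo, 9))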
Nov 1 01:29:42.070000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.070000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001bd6b0 a2=3c a3=7fa6bc7132f8 items=0 ppid=3136 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:42.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162646536303131643833316564366436666135373534663964343862 Nov 1 01:29:42.070000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.070000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.070000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.070000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.070000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.070000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.070000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.070000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.070000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.070000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.070000 audit: BPF prog-id=133 op=LOAD Nov 1 01:29:42.070000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001bd9d8 a2=78 a3=c000024738 items=0 ppid=3136 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:42.070000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162646536303131643833316564366436666135373534663964343862 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit: BPF prog-id=134 op=LOAD Nov 1 01:29:42.071000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001bd770 a2=78 a3=c000024788 items=0 ppid=3136 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:42.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162646536303131643833316564366436666135373534663964343862 Nov 1 01:29:42.071000 audit: BPF prog-id=134 op=UNLOAD Nov 1 01:29:42.071000 audit: BPF prog-id=133 op=UNLOAD Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { perfmon } for pid=3294 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit[3294]: AVC avc: denied { bpf } for pid=3294 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:42.071000 audit: BPF prog-id=135 op=LOAD Nov 1 01:29:42.071000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001bdc30 a2=78 a3=c000024818 items=0 ppid=3136 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:42.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162646536303131643833316564366436666135373534663964343862 Nov 1 01:29:42.077976 env[1561]: time="2025-11-01T01:29:42.077953564Z" level=info msg="StartContainer for \"abde6011d831ed6d6fa5754f9d48b48a954ea0682bd230168175bb514a60c4cc\" returns successfully" Nov 1 01:29:42.083160 systemd[1]: cri-containerd-abde6011d831ed6d6fa5754f9d48b48a954ea0682bd230168175bb514a60c4cc.scope: Deactivated successfully. Nov 1 01:29:42.092000 audit: BPF prog-id=135 op=UNLOAD Nov 1 01:29:42.154024 kubelet[2505]: E1101 01:29:42.153927 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:29:42.222187 kubelet[2505]: I1101 01:29:42.222122 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:29:42.477868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abde6011d831ed6d6fa5754f9d48b48a954ea0682bd230168175bb514a60c4cc-rootfs.mount: Deactivated successfully. 
Nov 1 01:29:42.508637 env[1561]: time="2025-11-01T01:29:42.508528948Z" level=info msg="shim disconnected" id=abde6011d831ed6d6fa5754f9d48b48a954ea0682bd230168175bb514a60c4cc Nov 1 01:29:42.508935 env[1561]: time="2025-11-01T01:29:42.508650273Z" level=warning msg="cleaning up after shim disconnected" id=abde6011d831ed6d6fa5754f9d48b48a954ea0682bd230168175bb514a60c4cc namespace=k8s.io Nov 1 01:29:42.508935 env[1561]: time="2025-11-01T01:29:42.508686515Z" level=info msg="cleaning up dead shim" Nov 1 01:29:42.525819 env[1561]: time="2025-11-01T01:29:42.525689424Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:29:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3336 runtime=io.containerd.runc.v2\n" Nov 1 01:29:43.230739 env[1561]: time="2025-11-01T01:29:43.230663807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 01:29:44.154229 kubelet[2505]: E1101 01:29:44.154141 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:29:46.153317 kubelet[2505]: E1101 01:29:46.153265 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:29:46.254708 kubelet[2505]: I1101 01:29:46.254675 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:29:46.268000 audit[3349]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:46.297065 kernel: kauditd_printk_skb: 97 callbacks suppressed Nov 1 01:29:46.297167 kernel: audit: type=1325 audit(1761960586.268:996): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:46.268000 audit[3349]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcfe0f2be0 a2=0 a3=7ffcfe0f2bcc items=0 ppid=2709 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:46.357432 kernel: audit: type=1300 audit(1761960586.268:996): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcfe0f2be0 a2=0 a3=7ffcfe0f2bcc items=0 ppid=2709 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:46.268000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:46.512940 kernel: audit: type=1327 audit(1761960586.268:996): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:46.513000 audit[3349]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:46.513000 audit[3349]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcfe0f2be0 
a2=0 a3=7ffcfe0f2bcc items=0 ppid=2709 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:46.671957 kernel: audit: type=1325 audit(1761960586.513:997): table=nat:100 family=2 entries=19 op=nft_register_chain pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:29:46.672018 kernel: audit: type=1300 audit(1761960586.513:997): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcfe0f2be0 a2=0 a3=7ffcfe0f2bcc items=0 ppid=2709 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:46.672035 kernel: audit: type=1327 audit(1761960586.513:997): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:46.513000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:29:46.705323 env[1561]: time="2025-11-01T01:29:46.705276144Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:46.706051 env[1561]: time="2025-11-01T01:29:46.705989408Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:46.706885 env[1561]: time="2025-11-01T01:29:46.706837988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:46.707651 env[1561]: time="2025-11-01T01:29:46.707609327Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:46.707969 env[1561]: time="2025-11-01T01:29:46.707927615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 01:29:46.710005 env[1561]: time="2025-11-01T01:29:46.709983762Z" level=info msg="CreateContainer within sandbox \"cc8fb8c2bedf7540c48a3f4de3c6e178c571df758aa2ea0b2d3c4a14841a1655\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 01:29:46.730535 env[1561]: time="2025-11-01T01:29:46.730485283Z" level=info msg="CreateContainer within sandbox \"cc8fb8c2bedf7540c48a3f4de3c6e178c571df758aa2ea0b2d3c4a14841a1655\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ef7f003a088d940d656eb3c944849c7336c2c653a9aa4ebf4bf1a65ade6347a7\"" Nov 1 01:29:46.730771 env[1561]: time="2025-11-01T01:29:46.730723572Z" level=info msg="StartContainer for \"ef7f003a088d940d656eb3c944849c7336c2c653a9aa4ebf4bf1a65ade6347a7\"" Nov 1 01:29:46.740769 systemd[1]: Started cri-containerd-ef7f003a088d940d656eb3c944849c7336c2c653a9aa4ebf4bf1a65ade6347a7.scope. 
Nov 1 01:29:46.746000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.746000 audit[3357]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001bd6b0 a2=3c a3=7f6f44d7d2c8 items=0 ppid=3136 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:46.909398 kernel: audit: type=1400 audit(1761960586.746:998): avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.909445 kernel: audit: type=1300 audit(1761960586.746:998): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001bd6b0 a2=3c a3=7f6f44d7d2c8 items=0 ppid=3136 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:46.909466 kernel: audit: type=1327 audit(1761960586.746:998): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566376630303361303838643934306436353665623363393434383439 Nov 1 01:29:46.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566376630303361303838643934306436353665623363393434383439 Nov 1 01:29:47.003170 kernel: audit: type=1400 audit(1761960586.746:999): avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.746000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.746000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.746000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.746000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.746000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.746000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.746000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 01:29:46.746000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.746000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.746000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.746000 audit: BPF prog-id=136 op=LOAD Nov 1 01:29:46.746000 audit[3357]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001bd9d8 a2=78 a3=c0000f3f68 items=0 ppid=3136 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:46.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566376630303361303838643934306436353665623363393434383439 Nov 1 01:29:46.908000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.908000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.908000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.908000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.908000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.908000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.908000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.908000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.908000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:46.908000 audit: BPF prog-id=137 op=LOAD Nov 1 01:29:46.908000 audit[3357]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001bd770 a2=78 a3=c0000f3fb8 items=0 ppid=3136 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:46.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566376630303361303838643934306436353665623363393434383439 Nov 1 01:29:47.066000 audit: BPF prog-id=137 op=UNLOAD Nov 1 01:29:47.066000 audit: BPF prog-id=136 op=UNLOAD Nov 1 01:29:47.066000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:47.066000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:47.066000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:47.066000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:47.066000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:47.066000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:47.066000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:47.066000 audit[3357]: AVC avc: denied { perfmon } for pid=3357 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:47.066000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:47.066000 audit[3357]: AVC avc: denied { bpf } for pid=3357 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:47.066000 audit: BPF prog-id=138 op=LOAD Nov 1 01:29:47.066000 audit[3357]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001bdc30 a2=78 a3=c0003c4048 items=0 ppid=3136 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:47.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566376630303361303838643934306436353665623363393434383439 Nov 1 01:29:47.073375 env[1561]: time="2025-11-01T01:29:47.073323167Z" level=info msg="StartContainer for \"ef7f003a088d940d656eb3c944849c7336c2c653a9aa4ebf4bf1a65ade6347a7\" returns successfully" Nov 1 01:29:47.991338 env[1561]: 
time="2025-11-01T01:29:47.991195084Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 01:29:47.995487 systemd[1]: cri-containerd-ef7f003a088d940d656eb3c944849c7336c2c653a9aa4ebf4bf1a65ade6347a7.scope: Deactivated successfully. Nov 1 01:29:47.995979 systemd[1]: cri-containerd-ef7f003a088d940d656eb3c944849c7336c2c653a9aa4ebf4bf1a65ade6347a7.scope: Consumed 1.508s CPU time. Nov 1 01:29:48.004841 kubelet[2505]: I1101 01:29:48.004773 2505 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 01:29:48.012000 audit: BPF prog-id=138 op=UNLOAD Nov 1 01:29:48.036136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef7f003a088d940d656eb3c944849c7336c2c653a9aa4ebf4bf1a65ade6347a7-rootfs.mount: Deactivated successfully. Nov 1 01:29:48.090117 systemd[1]: Created slice kubepods-burstable-podc8ca38b0_d8b7_4714_8abe_3e911e8eec29.slice. Nov 1 01:29:48.108783 systemd[1]: Created slice kubepods-besteffort-pod593908a5_f718_4b03_b095_540ff204a4bd.slice. Nov 1 01:29:48.147108 systemd[1]: Created slice kubepods-besteffort-podcb08aa02_32db_4371_b5cc_c9a5a7fd22c8.slice. Nov 1 01:29:48.162882 kubelet[2505]: I1101 01:29:48.162792 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8ca38b0-d8b7-4714-8abe-3e911e8eec29-config-volume\") pod \"coredns-66bc5c9577-54j2g\" (UID: \"c8ca38b0-d8b7-4714-8abe-3e911e8eec29\") " pod="kube-system/coredns-66bc5c9577-54j2g" Nov 1 01:29:48.165727 kubelet[2505]: I1101 01:29:48.162942 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/593908a5-f718-4b03-b095-540ff204a4bd-tigera-ca-bundle\") pod \"calico-kube-controllers-5bff9f9fd4-4vmq2\" (UID: \"593908a5-f718-4b03-b095-540ff204a4bd\") " pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" Nov 1 01:29:48.165727 kubelet[2505]: I1101 01:29:48.163047 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cb08aa02-32db-4371-b5cc-c9a5a7fd22c8-calico-apiserver-certs\") pod \"calico-apiserver-7b6cfc8885-lqwd9\" (UID: \"cb08aa02-32db-4371-b5cc-c9a5a7fd22c8\") " pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" Nov 1 01:29:48.165727 kubelet[2505]: I1101 01:29:48.163143 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p2vh\" (UniqueName: \"kubernetes.io/projected/c8ca38b0-d8b7-4714-8abe-3e911e8eec29-kube-api-access-2p2vh\") pod \"coredns-66bc5c9577-54j2g\" (UID: \"c8ca38b0-d8b7-4714-8abe-3e911e8eec29\") " pod="kube-system/coredns-66bc5c9577-54j2g" Nov 1 01:29:48.165727 kubelet[2505]: I1101 01:29:48.163375 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8fvd\" (UniqueName: \"kubernetes.io/projected/593908a5-f718-4b03-b095-540ff204a4bd-kube-api-access-t8fvd\") pod \"calico-kube-controllers-5bff9f9fd4-4vmq2\" (UID: \"593908a5-f718-4b03-b095-540ff204a4bd\") " pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" Nov 1 01:29:48.165727 kubelet[2505]: I1101 01:29:48.163540 2505 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln2xd\" (UniqueName: \"kubernetes.io/projected/cb08aa02-32db-4371-b5cc-c9a5a7fd22c8-kube-api-access-ln2xd\") pod \"calico-apiserver-7b6cfc8885-lqwd9\" (UID: \"cb08aa02-32db-4371-b5cc-c9a5a7fd22c8\") " pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" Nov 1 01:29:48.198568 systemd[1]: Created slice kubepods-burstable-pod4c87fc13_f2aa_4700_9b99_82cf119d8f7d.slice. Nov 1 01:29:48.209442 systemd[1]: Created slice kubepods-besteffort-pod79df0ba2_6e86_422c_8f93_652dfb942b69.slice. Nov 1 01:29:48.232991 env[1561]: time="2025-11-01T01:29:48.232878150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9wz7k,Uid:79df0ba2-6e86-422c-8f93-652dfb942b69,Namespace:calico-system,Attempt:0,}" Nov 1 01:29:48.243482 systemd[1]: Created slice kubepods-besteffort-pod438a7b01_7b7b_439d_a5c9_a6d4d681a41f.slice. Nov 1 01:29:48.263420 systemd[1]: Created slice kubepods-besteffort-pod9aac0066_097a_4582_8ce8_a3a1ddb41b3d.slice. Nov 1 01:29:48.264140 kubelet[2505]: I1101 01:29:48.264078 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c87fc13-f2aa-4700-9b99-82cf119d8f7d-config-volume\") pod \"coredns-66bc5c9577-qqpf5\" (UID: \"4c87fc13-f2aa-4700-9b99-82cf119d8f7d\") " pod="kube-system/coredns-66bc5c9577-qqpf5" Nov 1 01:29:48.264590 kubelet[2505]: I1101 01:29:48.264161 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aac0066-097a-4582-8ce8-a3a1ddb41b3d-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-6xhxh\" (UID: \"9aac0066-097a-4582-8ce8-a3a1ddb41b3d\") " pod="calico-system/goldmane-7c778bb748-6xhxh" Nov 1 01:29:48.264590 kubelet[2505]: I1101 01:29:48.264267 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9aac0066-097a-4582-8ce8-a3a1ddb41b3d-goldmane-key-pair\") pod \"goldmane-7c778bb748-6xhxh\" (UID: \"9aac0066-097a-4582-8ce8-a3a1ddb41b3d\") " pod="calico-system/goldmane-7c778bb748-6xhxh" Nov 1 01:29:48.264590 kubelet[2505]: I1101 01:29:48.264370 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/438a7b01-7b7b-439d-a5c9-a6d4d681a41f-calico-apiserver-certs\") pod \"calico-apiserver-7b6cfc8885-rhjss\" (UID: \"438a7b01-7b7b-439d-a5c9-a6d4d681a41f\") " pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" Nov 1 01:29:48.265198 kubelet[2505]: I1101 01:29:48.264601 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5v42\" (UniqueName: \"kubernetes.io/projected/438a7b01-7b7b-439d-a5c9-a6d4d681a41f-kube-api-access-f5v42\") pod \"calico-apiserver-7b6cfc8885-rhjss\" (UID: \"438a7b01-7b7b-439d-a5c9-a6d4d681a41f\") " pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" Nov 1 01:29:48.265198 kubelet[2505]: I1101 01:29:48.264699 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aac0066-097a-4582-8ce8-a3a1ddb41b3d-config\") pod \"goldmane-7c778bb748-6xhxh\" (UID: \"9aac0066-097a-4582-8ce8-a3a1ddb41b3d\") " pod="calico-system/goldmane-7c778bb748-6xhxh" Nov 1 01:29:48.265198 kubelet[2505]: I1101 
01:29:48.264844 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twnf5\" (UniqueName: \"kubernetes.io/projected/4c87fc13-f2aa-4700-9b99-82cf119d8f7d-kube-api-access-twnf5\") pod \"coredns-66bc5c9577-qqpf5\" (UID: \"4c87fc13-f2aa-4700-9b99-82cf119d8f7d\") " pod="kube-system/coredns-66bc5c9577-qqpf5" Nov 1 01:29:48.265198 kubelet[2505]: I1101 01:29:48.265073 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt9ws\" (UniqueName: \"kubernetes.io/projected/9aac0066-097a-4582-8ce8-a3a1ddb41b3d-kube-api-access-jt9ws\") pod \"goldmane-7c778bb748-6xhxh\" (UID: \"9aac0066-097a-4582-8ce8-a3a1ddb41b3d\") " pod="calico-system/goldmane-7c778bb748-6xhxh" Nov 1 01:29:48.353102 systemd[1]: Created slice kubepods-besteffort-pod699e288f_cbfb_40ea_bb6d_670640afd205.slice. Nov 1 01:29:48.366114 kubelet[2505]: I1101 01:29:48.366036 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/699e288f-cbfb-40ea-bb6d-670640afd205-whisker-ca-bundle\") pod \"whisker-7875cf6866-bnfcp\" (UID: \"699e288f-cbfb-40ea-bb6d-670640afd205\") " pod="calico-system/whisker-7875cf6866-bnfcp" Nov 1 01:29:48.366489 kubelet[2505]: I1101 01:29:48.366159 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rljcm\" (UniqueName: \"kubernetes.io/projected/699e288f-cbfb-40ea-bb6d-670640afd205-kube-api-access-rljcm\") pod \"whisker-7875cf6866-bnfcp\" (UID: \"699e288f-cbfb-40ea-bb6d-670640afd205\") " pod="calico-system/whisker-7875cf6866-bnfcp" Nov 1 01:29:48.366741 kubelet[2505]: I1101 01:29:48.366642 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/699e288f-cbfb-40ea-bb6d-670640afd205-whisker-backend-key-pair\") pod \"whisker-7875cf6866-bnfcp\" (UID: \"699e288f-cbfb-40ea-bb6d-670640afd205\") " pod="calico-system/whisker-7875cf6866-bnfcp" Nov 1 01:29:48.412176 env[1561]: time="2025-11-01T01:29:48.412090979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-54j2g,Uid:c8ca38b0-d8b7-4714-8abe-3e911e8eec29,Namespace:kube-system,Attempt:0,}" Nov 1 01:29:48.416480 env[1561]: time="2025-11-01T01:29:48.416364108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bff9f9fd4-4vmq2,Uid:593908a5-f718-4b03-b095-540ff204a4bd,Namespace:calico-system,Attempt:0,}" Nov 1 01:29:48.425361 env[1561]: time="2025-11-01T01:29:48.425266861Z" level=info msg="shim disconnected" id=ef7f003a088d940d656eb3c944849c7336c2c653a9aa4ebf4bf1a65ade6347a7 Nov 1 01:29:48.425668 env[1561]: time="2025-11-01T01:29:48.425367539Z" level=warning msg="cleaning up after shim disconnected" id=ef7f003a088d940d656eb3c944849c7336c2c653a9aa4ebf4bf1a65ade6347a7 namespace=k8s.io Nov 1 01:29:48.425668 env[1561]: time="2025-11-01T01:29:48.425421088Z" level=info msg="cleaning up dead shim" Nov 1 01:29:48.443756 env[1561]: time="2025-11-01T01:29:48.443673314Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:29:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3431 runtime=io.containerd.runc.v2\n" Nov 1 01:29:48.453854 env[1561]: time="2025-11-01T01:29:48.453783793Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7b6cfc8885-lqwd9,Uid:cb08aa02-32db-4371-b5cc-c9a5a7fd22c8,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:29:48.478068 env[1561]: time="2025-11-01T01:29:48.478023873Z" level=error msg="Failed to destroy network for sandbox \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.478489 env[1561]: time="2025-11-01T01:29:48.478444032Z" level=error msg="Failed to destroy network for sandbox \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.478686 env[1561]: time="2025-11-01T01:29:48.478644765Z" level=error msg="encountered an error cleaning up failed sandbox \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.478744 env[1561]: time="2025-11-01T01:29:48.478714337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9wz7k,Uid:79df0ba2-6e86-422c-8f93-652dfb942b69,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.478789 env[1561]: time="2025-11-01T01:29:48.478729730Z" level=error msg="encountered an error cleaning up failed sandbox \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.478833 env[1561]: time="2025-11-01T01:29:48.478787596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-54j2g,Uid:c8ca38b0-d8b7-4714-8abe-3e911e8eec29,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.478962 kubelet[2505]: E1101 01:29:48.478930 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.479023 kubelet[2505]: E1101 01:29:48.478997 2505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-54j2g" Nov 1 01:29:48.479023 kubelet[2505]: E1101 01:29:48.479017 2505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-54j2g" Nov 1 01:29:48.479099 kubelet[2505]: E1101 01:29:48.479064 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-54j2g_kube-system(c8ca38b0-d8b7-4714-8abe-3e911e8eec29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-54j2g_kube-system(c8ca38b0-d8b7-4714-8abe-3e911e8eec29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-54j2g" podUID="c8ca38b0-d8b7-4714-8abe-3e911e8eec29" Nov 1 01:29:48.479099 kubelet[2505]: E1101 01:29:48.478930 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.479205 kubelet[2505]: E1101 01:29:48.479105 2505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9wz7k" Nov 1 01:29:48.479205 kubelet[2505]: E1101 01:29:48.479121 2505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9wz7k" Nov 1 01:29:48.479205 kubelet[2505]: E1101 01:29:48.479163 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:29:48.479890 env[1561]: time="2025-11-01T01:29:48.479867958Z" level=error msg="Failed to destroy network for sandbox \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.480073 env[1561]: time="2025-11-01T01:29:48.480054366Z" level=error msg="encountered an error cleaning up failed sandbox \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.480128 env[1561]: time="2025-11-01T01:29:48.480085789Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bff9f9fd4-4vmq2,Uid:593908a5-f718-4b03-b095-540ff204a4bd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.480215 kubelet[2505]: E1101 01:29:48.480198 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.480248 kubelet[2505]: E1101 01:29:48.480225 2505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" Nov 1 01:29:48.480248 kubelet[2505]: E1101 01:29:48.480237 2505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" Nov 1 01:29:48.480300 kubelet[2505]: E1101 01:29:48.480268 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bff9f9fd4-4vmq2_calico-system(593908a5-f718-4b03-b095-540ff204a4bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bff9f9fd4-4vmq2_calico-system(593908a5-f718-4b03-b095-540ff204a4bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:29:48.491091 env[1561]: time="2025-11-01T01:29:48.491032126Z" level=error msg="Failed to destroy network for sandbox \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.491227 env[1561]: time="2025-11-01T01:29:48.491212768Z" level=error msg="encountered an error cleaning up failed sandbox \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.491254 env[1561]: time="2025-11-01T01:29:48.491242693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6cfc8885-lqwd9,Uid:cb08aa02-32db-4371-b5cc-c9a5a7fd22c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.491471 kubelet[2505]: E1101 01:29:48.491382 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.491471 kubelet[2505]: E1101 01:29:48.491447 2505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" Nov 1 01:29:48.491471 kubelet[2505]: E1101 01:29:48.491461 2505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" Nov 1 01:29:48.491565 kubelet[2505]: E1101 01:29:48.491500 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b6cfc8885-lqwd9_calico-apiserver(cb08aa02-32db-4371-b5cc-c9a5a7fd22c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7b6cfc8885-lqwd9_calico-apiserver(cb08aa02-32db-4371-b5cc-c9a5a7fd22c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:29:48.509965 env[1561]: time="2025-11-01T01:29:48.509758799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qqpf5,Uid:4c87fc13-f2aa-4700-9b99-82cf119d8f7d,Namespace:kube-system,Attempt:0,}" Nov 1 01:29:48.551900 env[1561]: time="2025-11-01T01:29:48.551792775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6cfc8885-rhjss,Uid:438a7b01-7b7b-439d-a5c9-a6d4d681a41f,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:29:48.569557 env[1561]: time="2025-11-01T01:29:48.569522759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6xhxh,Uid:9aac0066-097a-4582-8ce8-a3a1ddb41b3d,Namespace:calico-system,Attempt:0,}" Nov 1 01:29:48.588119 env[1561]: time="2025-11-01T01:29:48.588071788Z" level=error msg="Failed to destroy network for sandbox \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.588405 env[1561]: time="2025-11-01T01:29:48.588371037Z" level=error msg="encountered an error cleaning up failed sandbox \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.588483 env[1561]: time="2025-11-01T01:29:48.588429022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qqpf5,Uid:4c87fc13-f2aa-4700-9b99-82cf119d8f7d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.588675 kubelet[2505]: E1101 01:29:48.588619 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.588742 kubelet[2505]: E1101 01:29:48.588675 2505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qqpf5" Nov 1 01:29:48.588742 
kubelet[2505]: E1101 01:29:48.588703 2505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qqpf5" Nov 1 01:29:48.588824 kubelet[2505]: E1101 01:29:48.588757 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qqpf5_kube-system(4c87fc13-f2aa-4700-9b99-82cf119d8f7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qqpf5_kube-system(4c87fc13-f2aa-4700-9b99-82cf119d8f7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qqpf5" podUID="4c87fc13-f2aa-4700-9b99-82cf119d8f7d" Nov 1 01:29:48.601869 env[1561]: time="2025-11-01T01:29:48.601826144Z" level=error msg="Failed to destroy network for sandbox \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.602140 env[1561]: time="2025-11-01T01:29:48.602093519Z" level=error msg="encountered an error cleaning up failed sandbox \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.602185 env[1561]: time="2025-11-01T01:29:48.602138673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6cfc8885-rhjss,Uid:438a7b01-7b7b-439d-a5c9-a6d4d681a41f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.602346 kubelet[2505]: E1101 01:29:48.602323 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.602393 kubelet[2505]: E1101 01:29:48.602363 2505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" Nov 1 01:29:48.602393 kubelet[2505]: E1101 01:29:48.602382 2505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" Nov 1 01:29:48.602465 kubelet[2505]: E1101 01:29:48.602439 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b6cfc8885-rhjss_calico-apiserver(438a7b01-7b7b-439d-a5c9-a6d4d681a41f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b6cfc8885-rhjss_calico-apiserver(438a7b01-7b7b-439d-a5c9-a6d4d681a41f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:29:48.611558 env[1561]: time="2025-11-01T01:29:48.611486589Z" level=error msg="Failed to destroy network for sandbox \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.611775 env[1561]: time="2025-11-01T01:29:48.611728917Z" level=error msg="encountered an error cleaning up failed sandbox \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.611839 env[1561]: time="2025-11-01T01:29:48.611771179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6xhxh,Uid:9aac0066-097a-4582-8ce8-a3a1ddb41b3d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.611991 kubelet[2505]: E1101 01:29:48.611937 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.611991 kubelet[2505]: E1101 01:29:48.611976 2505 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-6xhxh" Nov 1 01:29:48.612082 kubelet[2505]: E1101 01:29:48.611994 2505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-6xhxh" Nov 1 01:29:48.612082 kubelet[2505]: E1101 01:29:48.612035 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-6xhxh_calico-system(9aac0066-097a-4582-8ce8-a3a1ddb41b3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-6xhxh_calico-system(9aac0066-097a-4582-8ce8-a3a1ddb41b3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:29:48.659193 env[1561]: time="2025-11-01T01:29:48.659060783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7875cf6866-bnfcp,Uid:699e288f-cbfb-40ea-bb6d-670640afd205,Namespace:calico-system,Attempt:0,}" Nov 1 01:29:48.748724 env[1561]: time="2025-11-01T01:29:48.748660147Z" level=error msg="Failed to destroy network for sandbox \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.749122 env[1561]: time="2025-11-01T01:29:48.749081499Z" level=error msg="encountered an error cleaning up failed sandbox \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.749226 env[1561]: time="2025-11-01T01:29:48.749149847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7875cf6866-bnfcp,Uid:699e288f-cbfb-40ea-bb6d-670640afd205,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.749451 kubelet[2505]: E1101 01:29:48.749391 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:48.749549 kubelet[2505]: E1101 01:29:48.749475 2505 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7875cf6866-bnfcp" Nov 1 01:29:48.749549 kubelet[2505]: E1101 01:29:48.749505 2505 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7875cf6866-bnfcp" Nov 1 01:29:48.749668 kubelet[2505]: E1101 01:29:48.749570 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7875cf6866-bnfcp_calico-system(699e288f-cbfb-40ea-bb6d-670640afd205)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7875cf6866-bnfcp_calico-system(699e288f-cbfb-40ea-bb6d-670640afd205)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7875cf6866-bnfcp" podUID="699e288f-cbfb-40ea-bb6d-670640afd205" Nov 1 01:29:49.248293 kubelet[2505]: I1101 01:29:49.248234 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:29:49.249697 env[1561]: time="2025-11-01T01:29:49.249612201Z" level=info msg="StopPodSandbox for \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\"" Nov 1 01:29:49.250480 kubelet[2505]: I1101 01:29:49.250162 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:29:49.251472 env[1561]: time="2025-11-01T01:29:49.251372682Z" level=info msg="StopPodSandbox for \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\"" Nov 1 01:29:49.252487 kubelet[2505]: I1101 01:29:49.252434 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:29:49.253622 env[1561]: time="2025-11-01T01:29:49.253550921Z" level=info msg="StopPodSandbox for \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\"" Nov 1 01:29:49.254969 kubelet[2505]: I1101 01:29:49.254907 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:29:49.256114 env[1561]: time="2025-11-01T01:29:49.256036542Z" level=info msg="StopPodSandbox for \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\"" Nov 1 01:29:49.257779 kubelet[2505]: I1101 01:29:49.257704 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:29:49.258998 env[1561]: 
time="2025-11-01T01:29:49.258896355Z" level=info msg="StopPodSandbox for \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\"" Nov 1 01:29:49.260896 kubelet[2505]: I1101 01:29:49.260782 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:29:49.261758 env[1561]: time="2025-11-01T01:29:49.261734490Z" level=info msg="StopPodSandbox for \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\"" Nov 1 01:29:49.261993 kubelet[2505]: I1101 01:29:49.261977 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:29:49.262447 env[1561]: time="2025-11-01T01:29:49.262421137Z" level=info msg="StopPodSandbox for \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\"" Nov 1 01:29:49.262825 kubelet[2505]: I1101 01:29:49.262802 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:29:49.263317 env[1561]: time="2025-11-01T01:29:49.263290570Z" level=info msg="StopPodSandbox for \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\"" Nov 1 01:29:49.265593 env[1561]: time="2025-11-01T01:29:49.265552041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 01:29:49.275941 env[1561]: time="2025-11-01T01:29:49.275897870Z" level=error msg="StopPodSandbox for \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\" failed" error="failed to destroy network for sandbox \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:49.276109 kubelet[2505]: E1101 01:29:49.276083 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:29:49.276158 kubelet[2505]: E1101 01:29:49.276127 2505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7"} Nov 1 01:29:49.276195 kubelet[2505]: E1101 01:29:49.276171 2505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79df0ba2-6e86-422c-8f93-652dfb942b69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:29:49.276256 kubelet[2505]: E1101 01:29:49.276197 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79df0ba2-6e86-422c-8f93-652dfb942b69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:29:49.276547 env[1561]: time="2025-11-01T01:29:49.276518834Z" level=error msg="StopPodSandbox for \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\" failed" error="failed to destroy network for sandbox \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:49.276748 env[1561]: time="2025-11-01T01:29:49.276613789Z" level=error msg="StopPodSandbox for \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\" failed" error="failed to destroy network for sandbox \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:49.276794 kubelet[2505]: E1101 01:29:49.276639 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:29:49.276794 kubelet[2505]: E1101 01:29:49.276670 2505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac"} Nov 1 01:29:49.276794 kubelet[2505]: E1101 01:29:49.276693 2505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c87fc13-f2aa-4700-9b99-82cf119d8f7d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:29:49.276794 kubelet[2505]: E1101 01:29:49.276721 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c87fc13-f2aa-4700-9b99-82cf119d8f7d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qqpf5" podUID="4c87fc13-f2aa-4700-9b99-82cf119d8f7d" Nov 1 01:29:49.276970 kubelet[2505]: E1101 01:29:49.276735 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:29:49.276970 kubelet[2505]: E1101 01:29:49.276766 2505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a"} Nov 1 01:29:49.276970 kubelet[2505]: E1101 01:29:49.276791 2505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"593908a5-f718-4b03-b095-540ff204a4bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:29:49.276970 kubelet[2505]: E1101 01:29:49.276811 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"593908a5-f718-4b03-b095-540ff204a4bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:29:49.277471 env[1561]: time="2025-11-01T01:29:49.277419426Z" level=error msg="StopPodSandbox for \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\" failed" error="failed to destroy network for sandbox \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:49.277651 kubelet[2505]: E1101 01:29:49.277625 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:29:49.277698 kubelet[2505]: E1101 01:29:49.277657 2505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f"} Nov 1 01:29:49.277748 kubelet[2505]: E1101 01:29:49.277697 2505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"438a7b01-7b7b-439d-a5c9-a6d4d681a41f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:29:49.277748 kubelet[2505]: E1101 01:29:49.277719 2505 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"438a7b01-7b7b-439d-a5c9-a6d4d681a41f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:29:49.278929 env[1561]: time="2025-11-01T01:29:49.278890746Z" level=error msg="StopPodSandbox for \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\" failed" error="failed to destroy network for sandbox \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:49.279259 kubelet[2505]: E1101 01:29:49.279240 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:29:49.279300 kubelet[2505]: E1101 01:29:49.279265 2505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102"} Nov 1 01:29:49.279300 kubelet[2505]: E1101 01:29:49.279284 2505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9aac0066-097a-4582-8ce8-a3a1ddb41b3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:29:49.279367 kubelet[2505]: E1101 01:29:49.279300 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9aac0066-097a-4582-8ce8-a3a1ddb41b3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:29:49.279422 env[1561]: time="2025-11-01T01:29:49.279372137Z" level=error msg="StopPodSandbox for \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\" failed" error="failed to destroy network for sandbox \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:49.279487 kubelet[2505]: E1101 
01:29:49.279472 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:29:49.279523 kubelet[2505]: E1101 01:29:49.279490 2505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e"} Nov 1 01:29:49.279523 kubelet[2505]: E1101 01:29:49.279504 2505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8ca38b0-d8b7-4714-8abe-3e911e8eec29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:29:49.279523 kubelet[2505]: E1101 01:29:49.279514 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8ca38b0-d8b7-4714-8abe-3e911e8eec29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-54j2g" podUID="c8ca38b0-d8b7-4714-8abe-3e911e8eec29" Nov 1 01:29:49.280375 env[1561]: time="2025-11-01T01:29:49.280357214Z" level=error msg="StopPodSandbox for \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\" failed" error="failed to destroy network for sandbox \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:49.280458 kubelet[2505]: E1101 01:29:49.280440 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:29:49.280503 kubelet[2505]: E1101 01:29:49.280461 2505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9"} Nov 1 01:29:49.280503 kubelet[2505]: E1101 01:29:49.280484 2505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb08aa02-32db-4371-b5cc-c9a5a7fd22c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:29:49.280572 kubelet[2505]: E1101 01:29:49.280503 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb08aa02-32db-4371-b5cc-c9a5a7fd22c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:29:49.281663 env[1561]: time="2025-11-01T01:29:49.281627221Z" level=error msg="StopPodSandbox for \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\" failed" error="failed to destroy network for sandbox \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:29:49.281810 kubelet[2505]: E1101 01:29:49.281772 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:29:49.281891 kubelet[2505]: E1101 01:29:49.281813 2505 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c"} Nov 1 01:29:49.281891 kubelet[2505]: E1101 01:29:49.281858 2505 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"699e288f-cbfb-40ea-bb6d-670640afd205\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:29:49.281891 kubelet[2505]: E1101 01:29:49.281885 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"699e288f-cbfb-40ea-bb6d-670640afd205\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7875cf6866-bnfcp" podUID="699e288f-cbfb-40ea-bb6d-670640afd205" Nov 1 01:29:56.741081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount894788984.mount: Deactivated successfully. 
Nov 1 01:29:56.757743 env[1561]: time="2025-11-01T01:29:56.757720383Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:56.758320 env[1561]: time="2025-11-01T01:29:56.758309456Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:56.758979 env[1561]: time="2025-11-01T01:29:56.758964360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:56.759625 env[1561]: time="2025-11-01T01:29:56.759614010Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:29:56.759925 env[1561]: time="2025-11-01T01:29:56.759912447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 01:29:56.765030 env[1561]: time="2025-11-01T01:29:56.765010342Z" level=info msg="CreateContainer within sandbox \"cc8fb8c2bedf7540c48a3f4de3c6e178c571df758aa2ea0b2d3c4a14841a1655\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 01:29:56.770299 env[1561]: time="2025-11-01T01:29:56.770282655Z" level=info msg="CreateContainer within sandbox \"cc8fb8c2bedf7540c48a3f4de3c6e178c571df758aa2ea0b2d3c4a14841a1655\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"89ba67833e94ac5a04edbbdf319e528181dffa1f4346b627ec4b4f15c0c5fba1\"" Nov 1 01:29:56.770641 env[1561]: time="2025-11-01T01:29:56.770628293Z" level=info msg="StartContainer for \"89ba67833e94ac5a04edbbdf319e528181dffa1f4346b627ec4b4f15c0c5fba1\"" Nov 1 01:29:56.778996 systemd[1]: Started cri-containerd-89ba67833e94ac5a04edbbdf319e528181dffa1f4346b627ec4b4f15c0c5fba1.scope. 
Nov 1 01:29:56.787000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.815599 kernel: kauditd_printk_skb: 40 callbacks suppressed Nov 1 01:29:56.815646 kernel: audit: type=1400 audit(1761960596.787:1005): avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.787000 audit[3993]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=7fa798ee7d58 items=0 ppid=3136 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:56.977181 kernel: audit: type=1300 audit(1761960596.787:1005): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=7fa798ee7d58 items=0 ppid=3136 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:56.977211 kernel: audit: type=1327 audit(1761960596.787:1005): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839626136373833336539346163356130346564626264663331396535 Nov 1 01:29:56.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839626136373833336539346163356130346564626264663331396535 Nov 1 01:29:57.070582 kernel: audit: type=1400 audit(1761960596.787:1006): avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.787000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.134224 kernel: audit: type=1400 audit(1761960596.787:1006): avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.787000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.197760 kernel: audit: type=1400 audit(1761960596.787:1006): avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.787000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.261346 kernel: audit: type=1400 audit(1761960596.787:1006): avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.787000 
audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.266917 env[1561]: time="2025-11-01T01:29:57.266892619Z" level=info msg="StartContainer for \"89ba67833e94ac5a04edbbdf319e528181dffa1f4346b627ec4b4f15c0c5fba1\" returns successfully" Nov 1 01:29:57.304588 kubelet[2505]: I1101 01:29:57.304546 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xr2tp" podStartSLOduration=1.685793607 podStartE2EDuration="20.304534745s" podCreationTimestamp="2025-11-01 01:29:37 +0000 UTC" firstStartedPulling="2025-11-01 01:29:38.141625591 +0000 UTC m=+18.047437837" lastFinishedPulling="2025-11-01 01:29:56.76036679 +0000 UTC m=+36.666178975" observedRunningTime="2025-11-01 01:29:57.304076514 +0000 UTC m=+37.209888698" watchObservedRunningTime="2025-11-01 01:29:57.304534745 +0000 UTC m=+37.210346929" Nov 1 01:29:57.325373 kernel: audit: type=1400 audit(1761960596.787:1006): avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.787000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.389473 kernel: audit: type=1400 audit(1761960596.787:1006): avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.787000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.453720 kernel: audit: type=1400 audit(1761960596.787:1006): avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.787000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.787000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.787000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.787000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.787000 audit: BPF prog-id=139 op=LOAD Nov 1 01:29:56.787000 audit[3993]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0002f3c58 items=0 ppid=3136 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:56.787000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839626136373833336539346163356130346564626264663331396535 Nov 1 01:29:56.878000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.878000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.878000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.878000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.878000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.878000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.878000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.878000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.878000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:56.878000 audit: BPF prog-id=140 op=LOAD Nov 1 01:29:56.878000 audit[3993]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0002f3ca8 items=0 ppid=3136 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:56.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839626136373833336539346163356130346564626264663331396535 Nov 1 01:29:57.069000 audit: BPF prog-id=140 op=UNLOAD Nov 1 01:29:57.069000 audit: BPF prog-id=139 op=UNLOAD Nov 1 01:29:57.069000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.069000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.069000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.069000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.069000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.069000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.069000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.069000 audit[3993]: AVC avc: denied { perfmon } for pid=3993 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.069000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.069000 audit[3993]: AVC avc: denied { bpf } for pid=3993 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:57.069000 audit: BPF prog-id=141 op=LOAD Nov 1 01:29:57.069000 audit[3993]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0002f3d38 items=0 ppid=3136 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:57.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839626136373833336539346163356130346564626264663331396535 Nov 1 01:29:57.602035 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 01:29:57.602073 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 01:29:57.652218 env[1561]: time="2025-11-01T01:29:57.652187728Z" level=info msg="StopPodSandbox for \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\"" Nov 1 01:29:57.703605 env[1561]: 2025-11-01 01:29:57.686 [INFO][4065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:29:57.703605 env[1561]: 2025-11-01 01:29:57.686 [INFO][4065] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" iface="eth0" netns="/var/run/netns/cni-64ce9179-5578-88a1-7861-0bfc14db7f39" Nov 1 01:29:57.703605 env[1561]: 2025-11-01 01:29:57.686 [INFO][4065] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" iface="eth0" netns="/var/run/netns/cni-64ce9179-5578-88a1-7861-0bfc14db7f39" Nov 1 01:29:57.703605 env[1561]: 2025-11-01 01:29:57.686 [INFO][4065] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" iface="eth0" netns="/var/run/netns/cni-64ce9179-5578-88a1-7861-0bfc14db7f39" Nov 1 01:29:57.703605 env[1561]: 2025-11-01 01:29:57.686 [INFO][4065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:29:57.703605 env[1561]: 2025-11-01 01:29:57.686 [INFO][4065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:29:57.703605 env[1561]: 2025-11-01 01:29:57.696 [INFO][4082] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" HandleID="k8s-pod-network.17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Workload="ci--3510.3.8--n--34cd8b9336-k8s-whisker--7875cf6866--bnfcp-eth0" Nov 1 01:29:57.703605 env[1561]: 2025-11-01 01:29:57.696 [INFO][4082] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:29:57.703605 env[1561]: 2025-11-01 01:29:57.696 [INFO][4082] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:29:57.703605 env[1561]: 2025-11-01 01:29:57.700 [WARNING][4082] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" HandleID="k8s-pod-network.17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Workload="ci--3510.3.8--n--34cd8b9336-k8s-whisker--7875cf6866--bnfcp-eth0" Nov 1 01:29:57.703605 env[1561]: 2025-11-01 01:29:57.700 [INFO][4082] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" HandleID="k8s-pod-network.17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Workload="ci--3510.3.8--n--34cd8b9336-k8s-whisker--7875cf6866--bnfcp-eth0" Nov 1 01:29:57.703605 env[1561]: 2025-11-01 01:29:57.701 [INFO][4082] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:29:57.703605 env[1561]: 2025-11-01 01:29:57.702 [INFO][4065] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:29:57.703906 env[1561]: time="2025-11-01T01:29:57.703650102Z" level=info msg="TearDown network for sandbox \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\" successfully" Nov 1 01:29:57.703906 env[1561]: time="2025-11-01T01:29:57.703671117Z" level=info msg="StopPodSandbox for \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\" returns successfully" Nov 1 01:29:57.733633 kubelet[2505]: I1101 01:29:57.733619 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/699e288f-cbfb-40ea-bb6d-670640afd205-whisker-ca-bundle\") pod \"699e288f-cbfb-40ea-bb6d-670640afd205\" (UID: \"699e288f-cbfb-40ea-bb6d-670640afd205\") " Nov 1 01:29:57.733699 kubelet[2505]: I1101 01:29:57.733643 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rljcm\" (UniqueName: \"kubernetes.io/projected/699e288f-cbfb-40ea-bb6d-670640afd205-kube-api-access-rljcm\") pod \"699e288f-cbfb-40ea-bb6d-670640afd205\" (UID: \"699e288f-cbfb-40ea-bb6d-670640afd205\") " Nov 1 01:29:57.733699 kubelet[2505]: I1101 01:29:57.733659 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/699e288f-cbfb-40ea-bb6d-670640afd205-whisker-backend-key-pair\") pod \"699e288f-cbfb-40ea-bb6d-670640afd205\" (UID: \"699e288f-cbfb-40ea-bb6d-670640afd205\") " Nov 1 01:29:57.733905 kubelet[2505]: I1101 01:29:57.733853 2505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/699e288f-cbfb-40ea-bb6d-670640afd205-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "699e288f-cbfb-40ea-bb6d-670640afd205" (UID: "699e288f-cbfb-40ea-bb6d-670640afd205"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 01:29:57.735494 kubelet[2505]: I1101 01:29:57.735446 2505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/699e288f-cbfb-40ea-bb6d-670640afd205-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "699e288f-cbfb-40ea-bb6d-670640afd205" (UID: "699e288f-cbfb-40ea-bb6d-670640afd205"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 01:29:57.735494 kubelet[2505]: I1101 01:29:57.735476 2505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699e288f-cbfb-40ea-bb6d-670640afd205-kube-api-access-rljcm" (OuterVolumeSpecName: "kube-api-access-rljcm") pod "699e288f-cbfb-40ea-bb6d-670640afd205" (UID: "699e288f-cbfb-40ea-bb6d-670640afd205"). InnerVolumeSpecName "kube-api-access-rljcm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:29:57.742031 systemd[1]: run-netns-cni\x2d64ce9179\x2d5578\x2d88a1\x2d7861\x2d0bfc14db7f39.mount: Deactivated successfully. Nov 1 01:29:57.742088 systemd[1]: var-lib-kubelet-pods-699e288f\x2dcbfb\x2d40ea\x2dbb6d\x2d670640afd205-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drljcm.mount: Deactivated successfully. Nov 1 01:29:57.742135 systemd[1]: var-lib-kubelet-pods-699e288f\x2dcbfb\x2d40ea\x2dbb6d\x2d670640afd205-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 01:29:57.834824 kubelet[2505]: I1101 01:29:57.834752 2505 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rljcm\" (UniqueName: \"kubernetes.io/projected/699e288f-cbfb-40ea-bb6d-670640afd205-kube-api-access-rljcm\") on node \"ci-3510.3.8-n-34cd8b9336\" DevicePath \"\"" Nov 1 01:29:57.834824 kubelet[2505]: I1101 01:29:57.834816 2505 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/699e288f-cbfb-40ea-bb6d-670640afd205-whisker-backend-key-pair\") on node \"ci-3510.3.8-n-34cd8b9336\" DevicePath \"\"" Nov 1 01:29:57.835203 kubelet[2505]: I1101 01:29:57.834843 2505 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/699e288f-cbfb-40ea-bb6d-670640afd205-whisker-ca-bundle\") on node \"ci-3510.3.8-n-34cd8b9336\" DevicePath \"\"" Nov 1 01:29:58.168836 systemd[1]: Removed slice kubepods-besteffort-pod699e288f_cbfb_40ea_bb6d_670640afd205.slice. Nov 1 01:29:58.382781 systemd[1]: Created slice kubepods-besteffort-pod2743a542_7119_47a8_937d_fec5c85bdcf2.slice. Nov 1 01:29:58.440315 kubelet[2505]: I1101 01:29:58.440121 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz628\" (UniqueName: \"kubernetes.io/projected/2743a542-7119-47a8-937d-fec5c85bdcf2-kube-api-access-tz628\") pod \"whisker-649c6d6f48-6pq8q\" (UID: \"2743a542-7119-47a8-937d-fec5c85bdcf2\") " pod="calico-system/whisker-649c6d6f48-6pq8q" Nov 1 01:29:58.441091 kubelet[2505]: I1101 01:29:58.440304 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2743a542-7119-47a8-937d-fec5c85bdcf2-whisker-ca-bundle\") pod \"whisker-649c6d6f48-6pq8q\" (UID: \"2743a542-7119-47a8-937d-fec5c85bdcf2\") " pod="calico-system/whisker-649c6d6f48-6pq8q" Nov 1 01:29:58.441091 kubelet[2505]: I1101 01:29:58.440492 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2743a542-7119-47a8-937d-fec5c85bdcf2-whisker-backend-key-pair\") pod \"whisker-649c6d6f48-6pq8q\" (UID: \"2743a542-7119-47a8-937d-fec5c85bdcf2\") " pod="calico-system/whisker-649c6d6f48-6pq8q" Nov 1 01:29:58.691277 env[1561]: time="2025-11-01T01:29:58.691051192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-649c6d6f48-6pq8q,Uid:2743a542-7119-47a8-937d-fec5c85bdcf2,Namespace:calico-system,Attempt:0,}" Nov 1 01:29:58.837454 systemd-networkd[1315]: cali3934cc389b1: Link UP Nov 1 01:29:58.897980 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:29:58.898019 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3934cc389b1: link becomes ready Nov 1 01:29:58.898006 systemd-networkd[1315]: cali3934cc389b1: Gained carrier Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.747 [INFO][4114] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.759 [INFO][4114] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-eth0 whisker-649c6d6f48- calico-system 2743a542-7119-47a8-937d-fec5c85bdcf2 887 0 2025-11-01 01:29:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:649c6d6f48 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-n-34cd8b9336 whisker-649c6d6f48-6pq8q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3934cc389b1 [] [] }} ContainerID="f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" Namespace="calico-system" Pod="whisker-649c6d6f48-6pq8q" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-" Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.760 [INFO][4114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" Namespace="calico-system" Pod="whisker-649c6d6f48-6pq8q" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-eth0" Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.791 [INFO][4136] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" HandleID="k8s-pod-network.f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" Workload="ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-eth0" Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.791 [INFO][4136] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" HandleID="k8s-pod-network.f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" Workload="ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dcc60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-34cd8b9336", "pod":"whisker-649c6d6f48-6pq8q", "timestamp":"2025-11-01 01:29:58.791317439 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34cd8b9336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.791 [INFO][4136] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.791 [INFO][4136] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.791 [INFO][4136] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34cd8b9336' Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.799 [INFO][4136] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.803 [INFO][4136] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.807 [INFO][4136] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.809 [INFO][4136] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.811 [INFO][4136] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.811 [INFO][4136] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.813 [INFO][4136] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0 Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.817 [INFO][4136] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.823 [INFO][4136] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.1/26] block=192.168.114.0/26 handle="k8s-pod-network.f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.823 [INFO][4136] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.1/26] handle="k8s-pod-network.f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.823 [INFO][4136] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:29:58.906028 env[1561]: 2025-11-01 01:29:58.823 [INFO][4136] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.1/26] IPv6=[] ContainerID="f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" HandleID="k8s-pod-network.f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" Workload="ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-eth0" Nov 1 01:29:58.906522 env[1561]: 2025-11-01 01:29:58.825 [INFO][4114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" Namespace="calico-system" Pod="whisker-649c6d6f48-6pq8q" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-eth0", GenerateName:"whisker-649c6d6f48-", Namespace:"calico-system", SelfLink:"", UID:"2743a542-7119-47a8-937d-fec5c85bdcf2", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"649c6d6f48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"", Pod:"whisker-649c6d6f48-6pq8q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3934cc389b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:29:58.906522 env[1561]: 2025-11-01 01:29:58.825 [INFO][4114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.1/32] ContainerID="f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" Namespace="calico-system" Pod="whisker-649c6d6f48-6pq8q" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-eth0" Nov 1 01:29:58.906522 env[1561]: 2025-11-01 01:29:58.825 [INFO][4114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3934cc389b1 ContainerID="f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" Namespace="calico-system" Pod="whisker-649c6d6f48-6pq8q" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-eth0" Nov 1 01:29:58.906522 env[1561]: 2025-11-01 01:29:58.898 [INFO][4114] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" Namespace="calico-system" Pod="whisker-649c6d6f48-6pq8q" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-eth0" Nov 1 01:29:58.906522 env[1561]: 2025-11-01 01:29:58.898 [INFO][4114] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" Namespace="calico-system" Pod="whisker-649c6d6f48-6pq8q" 
WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-eth0", GenerateName:"whisker-649c6d6f48-", Namespace:"calico-system", SelfLink:"", UID:"2743a542-7119-47a8-937d-fec5c85bdcf2", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"649c6d6f48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0", Pod:"whisker-649c6d6f48-6pq8q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3934cc389b1", MAC:"6a:30:af:9a:90:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:29:58.906522 env[1561]: 2025-11-01 01:29:58.904 [INFO][4114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0" Namespace="calico-system" Pod="whisker-649c6d6f48-6pq8q" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-whisker--649c6d6f48--6pq8q-eth0" Nov 1 01:29:58.910902 env[1561]: time="2025-11-01T01:29:58.910805418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:29:58.910902 env[1561]: time="2025-11-01T01:29:58.910832619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:29:58.910902 env[1561]: time="2025-11-01T01:29:58.910842658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:29:58.911029 env[1561]: time="2025-11-01T01:29:58.910921406Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0 pid=4171 runtime=io.containerd.runc.v2 Nov 1 01:29:58.919317 systemd[1]: Started cri-containerd-f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0.scope. 
Nov 1 01:29:58.922000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.922000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.922000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.922000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.922000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.922000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.922000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.922000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.922000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.922000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.922000 audit: BPF prog-id=142 op=LOAD Nov 1 01:29:58.922000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.922000 audit[4182]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000117c48 a2=10 a3=1c items=0 ppid=4171 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639336161393331366638626236616136386634306339653163346161 Nov 1 01:29:58.922000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.922000 audit[4182]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001176b0 a2=3c a3=c items=0 ppid=4171 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.922000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639336161393331366638626236616136386634306339653163346161 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit: BPF prog-id=143 op=LOAD Nov 1 01:29:58.923000 audit[4182]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001179d8 a2=78 a3=c00020b2a0 items=0 ppid=4171 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639336161393331366638626236616136386634306339653163346161 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: 
denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit: BPF prog-id=144 op=LOAD Nov 1 01:29:58.923000 audit[4182]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000117770 a2=78 a3=c00020b2e8 items=0 ppid=4171 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639336161393331366638626236616136386634306339653163346161 Nov 1 01:29:58.923000 audit: BPF prog-id=144 op=UNLOAD Nov 1 01:29:58.923000 audit: BPF prog-id=143 op=UNLOAD Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC 
avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { perfmon } for pid=4182 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit[4182]: AVC avc: denied { bpf } for pid=4182 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.923000 audit: BPF prog-id=145 op=LOAD Nov 1 01:29:58.923000 audit[4182]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000117c30 a2=78 a3=c00020b6f8 items=0 ppid=4171 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639336161393331366638626236616136386634306339653163346161 Nov 1 01:29:58.937000 audit[4237]: AVC avc: denied { write } for pid=4237 comm="tee" name="fd" dev="proc" ino=38264 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:29:58.937000 audit[4244]: AVC avc: denied { write } for pid=4244 comm="tee" name="fd" dev="proc" ino=40964 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:29:58.937000 audit[4244]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffff1fbf7c9 a2=241 a3=1b6 items=1 ppid=4206 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.937000 audit: CWD cwd="/etc/service/enabled/cni/log" Nov 1 01:29:58.937000 audit: PATH item=0 name="/dev/fd/63" inode=33337 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:29:58.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:29:58.937000 audit[4242]: AVC avc: denied { write } for pid=4242 comm="tee" name="fd" dev="proc" ino=16366 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:29:58.937000 audit[4242]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc22ff47b8 a2=241 a3=1b6 items=1 ppid=4207 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.937000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Nov 1 01:29:58.937000 audit: PATH item=0 name="/dev/fd/63" inode=40961 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:29:58.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:29:58.937000 audit[4246]: AVC avc: denied { write } for pid=4246 comm="tee" name="fd" dev="proc" ino=39079 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:29:58.937000 audit[4246]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffff6d097c7 a2=241 a3=1b6 items=1 ppid=4211 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.937000 audit: CWD cwd="/etc/service/enabled/confd/log" Nov 1 01:29:58.937000 audit: PATH item=0 name="/dev/fd/63" inode=19307 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:29:58.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:29:58.937000 audit[4248]: AVC avc: denied { write } for pid=4248 comm="tee" name="fd" dev="proc" ino=33340 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:29:58.937000 audit[4248]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd1f7237b7 a2=241 a3=1b6 items=1 ppid=4212 pid=4248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.937000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 01:29:58.937000 audit: PATH item=0 name="/dev/fd/63" inode=32330 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:29:58.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:29:58.937000 audit[4237]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffb24f77c7 a2=241 a3=1b6 items=1 ppid=4208 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.937000 audit: CWD cwd="/etc/service/enabled/bird6/log" Nov 1 01:29:58.937000 audit: PATH item=0 name="/dev/fd/63" inode=39076 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:29:58.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:29:58.937000 audit[4252]: AVC avc: denied { write } for pid=4252 comm="tee" name="fd" dev="proc" ino=40005 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:29:58.937000 audit[4252]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc7cdba7c8 a2=241 a3=1b6 
items=1 ppid=4220 pid=4252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.937000 audit: CWD cwd="/etc/service/enabled/bird/log" Nov 1 01:29:58.937000 audit: PATH item=0 name="/dev/fd/63" inode=34375 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:29:58.937000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:29:58.938000 audit[4277]: AVC avc: denied { write } for pid=4277 comm="tee" name="fd" dev="proc" ino=31287 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:29:58.938000 audit[4277]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff78f9d7c7 a2=241 a3=1b6 items=1 ppid=4209 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.938000 audit: CWD cwd="/etc/service/enabled/felix/log" Nov 1 01:29:58.938000 audit: PATH item=0 name="/dev/fd/63" inode=32331 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:29:58.938000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:29:58.945307 env[1561]: time="2025-11-01T01:29:58.943728286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-649c6d6f48-6pq8q,Uid:2743a542-7119-47a8-937d-fec5c85bdcf2,Namespace:calico-system,Attempt:0,} returns sandbox id \"f93aa9316f8bb6aa68f40c9e1c4aa5c538fa9ef7a7ae63ba5276472b3c1716f0\"" Nov 1 01:29:58.945769 env[1561]: time="2025-11-01T01:29:58.945709932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit: BPF prog-id=146 op=LOAD Nov 1 01:29:58.989000 audit[4353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe99a54c0 a2=98 a3=1fffffffffffffff items=0 ppid=4216 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.989000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 01:29:58.989000 audit: BPF prog-id=146 op=UNLOAD Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit: BPF prog-id=147 op=LOAD Nov 1 01:29:58.989000 audit[4353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe99a53a0 a2=94 a3=3 items=0 ppid=4216 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.989000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 01:29:58.989000 audit: BPF prog-id=147 op=UNLOAD Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { bpf } for pid=4353 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit: BPF prog-id=148 op=LOAD Nov 1 01:29:58.989000 audit[4353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe99a53e0 a2=94 a3=7fffe99a55c0 items=0 ppid=4216 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.989000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 01:29:58.989000 audit: BPF prog-id=148 op=UNLOAD Nov 1 01:29:58.989000 audit[4353]: AVC avc: denied { perfmon } for pid=4353 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.989000 audit[4353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fffe99a54b0 a2=50 a3=a000000085 items=0 ppid=4216 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.989000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit: BPF prog-id=149 op=LOAD Nov 1 01:29:58.990000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd3bfc4430 a2=98 a3=3 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.990000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:58.990000 audit: BPF prog-id=149 op=UNLOAD Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } 
for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit: BPF prog-id=150 op=LOAD Nov 1 01:29:58.990000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd3bfc4220 a2=94 a3=54428f items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.990000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:58.990000 audit: BPF prog-id=150 op=UNLOAD Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:58.990000 audit: BPF prog-id=151 op=LOAD Nov 1 01:29:58.990000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd3bfc4250 a2=94 a3=2 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:58.990000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:58.990000 audit: BPF prog-id=151 op=UNLOAD Nov 1 01:29:59.074000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.074000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.074000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.074000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.074000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.074000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.074000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.074000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.074000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.074000 audit: BPF prog-id=152 op=LOAD Nov 1 01:29:59.074000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd3bfc4110 a2=94 a3=1 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.074000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.074000 audit: BPF prog-id=152 op=UNLOAD Nov 1 01:29:59.074000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.074000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd3bfc41e0 a2=50 a3=7ffd3bfc42c0 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.074000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3bfc4120 a2=28 a3=0 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.080000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3bfc4150 a2=28 a3=0 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.080000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3bfc4060 a2=28 a3=0 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.080000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3bfc4170 a2=28 a3=0 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.080000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3bfc4150 a2=28 a3=0 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.080000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3bfc4140 a2=28 a3=0 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.080000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3bfc4170 a2=28 a3=0 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.080000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3bfc4150 a2=28 a3=0 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.080000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3bfc4170 a2=28 a3=0 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.080000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3bfc4140 a2=28 a3=0 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.080000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3bfc41b0 a2=28 a3=0 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.080000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd3bfc3f60 a2=50 a3=1 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.080000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.080000 audit: BPF prog-id=153 op=LOAD Nov 1 01:29:59.080000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd3bfc3f60 a2=94 a3=5 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.080000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.081000 audit: BPF prog-id=153 op=UNLOAD Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 
a1=7ffd3bfc4010 a2=50 a3=1 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.081000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd3bfc4130 a2=4 a3=38 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.081000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { confidentiality } for pid=4354 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:29:59.081000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd3bfc4180 a2=94 a3=6 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.081000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { confidentiality } for pid=4354 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:29:59.081000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd3bfc3930 a2=94 a3=88 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.081000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { perfmon } for pid=4354 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { bpf } for pid=4354 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.081000 audit[4354]: AVC avc: denied { confidentiality } for pid=4354 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:29:59.081000 audit[4354]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd3bfc3930 a2=94 a3=88 items=0 ppid=4216 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.081000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit: BPF prog-id=154 op=LOAD Nov 1 01:29:59.085000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe2bdfb50 a2=98 a3=1999999999999999 items=0 ppid=4216 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.085000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 01:29:59.085000 audit: BPF prog-id=154 op=UNLOAD Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit: BPF prog-id=155 op=LOAD Nov 1 01:29:59.085000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe2bdfa30 a2=94 
a3=ffff items=0 ppid=4216 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.085000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 01:29:59.085000 audit: BPF prog-id=155 op=UNLOAD Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { perfmon } for pid=4357 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit[4357]: AVC avc: denied { bpf } for pid=4357 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.085000 audit: BPF prog-id=156 op=LOAD Nov 1 01:29:59.085000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe2bdfa70 a2=94 a3=7fffe2bdfc50 items=0 ppid=4216 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.085000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 01:29:59.085000 audit: BPF prog-id=156 op=UNLOAD Nov 1 01:29:59.113000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.113000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.113000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.113000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.113000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.113000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.113000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.113000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.113000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.113000 audit: BPF prog-id=157 op=LOAD Nov 1 01:29:59.113000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe464d58e0 a2=98 a3=0 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.113000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit: BPF prog-id=157 op=UNLOAD Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit: BPF prog-id=158 op=LOAD Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe464d56f0 a2=94 a3=54428f items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit: BPF prog-id=158 op=UNLOAD Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit: BPF prog-id=159 op=LOAD Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe464d5720 a2=94 a3=2 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit: BPF prog-id=159 op=UNLOAD Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe464d55f0 a2=28 a3=0 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe464d5620 a2=28 a3=0 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe464d5530 a2=28 a3=0 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe464d5640 a2=28 a3=0 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe464d5620 a2=28 a3=0 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe464d5610 a2=28 a3=0 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe464d5640 a2=28 a3=0 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe464d5620 a2=28 a3=0 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe464d5640 a2=28 a3=0 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe464d5610 a2=28 a3=0 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe464d5680 a2=28 a3=0 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit: BPF prog-id=160 op=LOAD Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe464d54f0 a2=94 a3=0 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit: BPF prog-id=160 op=UNLOAD Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffe464d54e0 a2=50 a3=2800 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffe464d54e0 a2=50 a3=2800 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit: BPF prog-id=161 op=LOAD Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe464d4d00 a2=94 a3=2 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.114000 audit: BPF prog-id=161 op=UNLOAD Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { perfmon } for pid=4385 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit[4385]: AVC avc: denied { bpf } for pid=4385 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.114000 audit: BPF prog-id=162 op=LOAD Nov 1 01:29:59.114000 audit[4385]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe464d4e00 a2=94 a3=30 items=0 ppid=4216 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:29:59.115000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.115000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.115000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.115000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.115000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.115000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.115000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.115000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.115000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.115000 audit: BPF prog-id=163 op=LOAD Nov 1 01:29:59.115000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 
a1=7fff95805540 a2=98 a3=0 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.115000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.115000 audit: BPF prog-id=163 op=UNLOAD Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit: BPF prog-id=164 op=LOAD Nov 1 01:29:59.116000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff95805330 a2=94 a3=54428f items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.116000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.116000 audit: BPF prog-id=164 op=UNLOAD Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 
audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.116000 audit: BPF prog-id=165 op=LOAD Nov 1 01:29:59.116000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff95805360 a2=94 a3=2 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.116000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.116000 audit: BPF prog-id=165 op=UNLOAD Nov 1 01:29:59.115518 systemd-networkd[1315]: vxlan.calico: Link UP Nov 1 01:29:59.115524 systemd-networkd[1315]: vxlan.calico: Gained carrier Nov 1 01:29:59.202000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.202000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.202000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.202000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.202000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.202000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 01:29:59.202000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.202000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.202000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.202000 audit: BPF prog-id=166 op=LOAD Nov 1 01:29:59.202000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff95805220 a2=94 a3=1 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.202000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.202000 audit: BPF prog-id=166 op=UNLOAD Nov 1 01:29:59.202000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.202000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff958052f0 a2=50 a3=7fff958053d0 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.202000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff95805230 a2=28 a3=0 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.209000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff95805260 a2=28 a3=0 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.209000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff95805170 a2=28 a3=0 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.209000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff95805280 a2=28 a3=0 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.209000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff95805260 a2=28 a3=0 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.209000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff95805250 a2=28 a3=0 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.209000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: SYSCALL arch=c000003e syscall=321 
success=yes exit=4 a0=12 a1=7fff95805280 a2=28 a3=0 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.209000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff95805260 a2=28 a3=0 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.209000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff95805280 a2=28 a3=0 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.209000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff95805250 a2=28 a3=0 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.209000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff958052c0 a2=28 a3=0 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.209000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff95805070 a2=50 a3=1 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.209000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.209000 audit: BPF prog-id=167 op=LOAD Nov 1 01:29:59.209000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff95805070 a2=94 a3=5 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.209000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.209000 audit: BPF prog-id=167 op=UNLOAD Nov 1 
01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff95805120 a2=50 a3=1 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff95805240 a2=4 a3=38 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { confidentiality } for pid=4391 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:29:59.210000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff95805290 a2=94 a3=6 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { confidentiality } for pid=4391 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:29:59.210000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff95804a40 a2=94 a3=88 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { perfmon } for pid=4391 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { confidentiality } for pid=4391 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:29:59.210000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff95804a40 a2=94 a3=88 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=0 a0=f a1=7fff95806470 a2=10 a3=f8f00800 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff95806310 a2=10 a3=3 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff958062b0 a2=10 a3=3 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.210000 audit[4391]: AVC avc: denied { bpf } for pid=4391 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:29:59.210000 audit[4391]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff958062b0 a2=10 a3=7 items=0 ppid=4216 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:29:59.227000 audit: BPF prog-id=162 op=UNLOAD Nov 1 01:29:59.261000 audit[4449]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=4449 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:29:59.261000 audit[4449]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff36880db0 a2=0 a3=7fff36880d9c items=0 ppid=4216 pid=4449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.261000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:29:59.264000 audit[4447]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=4447 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:29:59.264000 audit[4447]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd10ae62a0 a2=0 a3=7ffd10ae628c items=0 ppid=4216 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.264000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:29:59.267000 audit[4448]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=4448 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:29:59.267000 audit[4448]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd4bb18860 a2=0 a3=7ffd4bb1884c items=0 ppid=4216 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.267000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:29:59.296171 env[1561]: time="2025-11-01T01:29:59.296106184Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:29:59.296565 env[1561]: time="2025-11-01T01:29:59.296466681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:29:59.296759 kubelet[2505]: E1101 01:29:59.296699 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:29:59.296759 kubelet[2505]: E1101 01:29:59.296737 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:29:59.296861 kubelet[2505]: E1101 01:29:59.296802 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-649c6d6f48-6pq8q_calico-system(2743a542-7119-47a8-937d-fec5c85bdcf2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:29:59.297416 env[1561]: time="2025-11-01T01:29:59.297365171Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:29:59.269000 audit[4452]: NETFILTER_CFG table=filter:104 family=2 entries=94 op=nft_register_chain pid=4452 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:29:59.269000 audit[4452]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe3682fda0 a2=0 a3=7ffe3682fd8c items=0 ppid=4216 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:29:59.269000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:29:59.707201 env[1561]: time="2025-11-01T01:29:59.707176820Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:29:59.707669 env[1561]: time="2025-11-01T01:29:59.707645121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:29:59.707847 kubelet[2505]: E1101 01:29:59.707812 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:29:59.708024 kubelet[2505]: E1101 01:29:59.707854 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:29:59.708024 kubelet[2505]: E1101 01:29:59.707901 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-649c6d6f48-6pq8q_calico-system(2743a542-7119-47a8-937d-fec5c85bdcf2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:29:59.708024 kubelet[2505]: E1101 01:29:59.707927 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:30:00.154033 env[1561]: time="2025-11-01T01:30:00.153956533Z" level=info msg="StopPodSandbox for \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\"" Nov 1 01:30:00.154919 kubelet[2505]: I1101 01:30:00.154900 2505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="699e288f-cbfb-40ea-bb6d-670640afd205" path="/var/lib/kubelet/pods/699e288f-cbfb-40ea-bb6d-670640afd205/volumes" Nov 1 01:30:00.200879 env[1561]: 2025-11-01 01:30:00.180 [INFO][4473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:30:00.200879 env[1561]: 2025-11-01 01:30:00.180 [INFO][4473] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" iface="eth0" netns="/var/run/netns/cni-95a7f9be-aabe-ca0f-fe5d-3b9f24a72cae" Nov 1 01:30:00.200879 env[1561]: 2025-11-01 01:30:00.181 [INFO][4473] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" iface="eth0" netns="/var/run/netns/cni-95a7f9be-aabe-ca0f-fe5d-3b9f24a72cae" Nov 1 01:30:00.200879 env[1561]: 2025-11-01 01:30:00.181 [INFO][4473] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" iface="eth0" netns="/var/run/netns/cni-95a7f9be-aabe-ca0f-fe5d-3b9f24a72cae" Nov 1 01:30:00.200879 env[1561]: 2025-11-01 01:30:00.181 [INFO][4473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:30:00.200879 env[1561]: 2025-11-01 01:30:00.181 [INFO][4473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:30:00.200879 env[1561]: 2025-11-01 01:30:00.193 [INFO][4490] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" HandleID="k8s-pod-network.c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:00.200879 env[1561]: 2025-11-01 01:30:00.193 [INFO][4490] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:00.200879 env[1561]: 2025-11-01 01:30:00.194 [INFO][4490] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:00.200879 env[1561]: 2025-11-01 01:30:00.198 [WARNING][4490] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" HandleID="k8s-pod-network.c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:00.200879 env[1561]: 2025-11-01 01:30:00.198 [INFO][4490] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" HandleID="k8s-pod-network.c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:00.200879 env[1561]: 2025-11-01 01:30:00.199 [INFO][4490] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:00.200879 env[1561]: 2025-11-01 01:30:00.200 [INFO][4473] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:30:00.201214 env[1561]: time="2025-11-01T01:30:00.200967022Z" level=info msg="TearDown network for sandbox \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\" successfully" Nov 1 01:30:00.201214 env[1561]: time="2025-11-01T01:30:00.200993562Z" level=info msg="StopPodSandbox for \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\" returns successfully" Nov 1 01:30:00.202540 systemd[1]: run-netns-cni\x2d95a7f9be\x2daabe\x2dca0f\x2dfe5d\x2d3b9f24a72cae.mount: Deactivated successfully. Nov 1 01:30:00.203073 env[1561]: time="2025-11-01T01:30:00.203060520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bff9f9fd4-4vmq2,Uid:593908a5-f718-4b03-b095-540ff204a4bd,Namespace:calico-system,Attempt:1,}" Nov 1 01:30:00.242505 systemd-networkd[1315]: cali3934cc389b1: Gained IPv6LL Nov 1 01:30:00.263001 systemd-networkd[1315]: cali141d6dd5341: Link UP Nov 1 01:30:00.291130 systemd-networkd[1315]: cali141d6dd5341: Gained carrier Nov 1 01:30:00.291404 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali141d6dd5341: link becomes ready Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.224 [INFO][4503] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0 calico-kube-controllers-5bff9f9fd4- calico-system 593908a5-f718-4b03-b095-540ff204a4bd 902 0 2025-11-01 01:29:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bff9f9fd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-n-34cd8b9336 calico-kube-controllers-5bff9f9fd4-4vmq2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali141d6dd5341 [] [] }} ContainerID="f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" Namespace="calico-system" Pod="calico-kube-controllers-5bff9f9fd4-4vmq2" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-" Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.224 [INFO][4503] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" Namespace="calico-system" Pod="calico-kube-controllers-5bff9f9fd4-4vmq2" 
WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.238 [INFO][4525] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" HandleID="k8s-pod-network.f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.238 [INFO][4525] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" HandleID="k8s-pod-network.f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e8770), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-34cd8b9336", "pod":"calico-kube-controllers-5bff9f9fd4-4vmq2", "timestamp":"2025-11-01 01:30:00.238653859 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34cd8b9336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.238 [INFO][4525] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.238 [INFO][4525] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.238 [INFO][4525] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34cd8b9336' Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.244 [INFO][4525] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.247 [INFO][4525] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.250 [INFO][4525] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.252 [INFO][4525] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.253 [INFO][4525] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.253 [INFO][4525] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.255 [INFO][4525] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88 Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.257 [INFO][4525] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.261 [INFO][4525] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.114.2/26] block=192.168.114.0/26 handle="k8s-pod-network.f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.261 [INFO][4525] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.2/26] handle="k8s-pod-network.f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.261 [INFO][4525] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:00.297848 env[1561]: 2025-11-01 01:30:00.261 [INFO][4525] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.2/26] IPv6=[] ContainerID="f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" HandleID="k8s-pod-network.f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:00.298287 env[1561]: 2025-11-01 01:30:00.262 [INFO][4503] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" Namespace="calico-system" Pod="calico-kube-controllers-5bff9f9fd4-4vmq2" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0", GenerateName:"calico-kube-controllers-5bff9f9fd4-", Namespace:"calico-system", SelfLink:"", UID:"593908a5-f718-4b03-b095-540ff204a4bd", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bff9f9fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"", Pod:"calico-kube-controllers-5bff9f9fd4-4vmq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali141d6dd5341", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:00.298287 env[1561]: 2025-11-01 01:30:00.262 [INFO][4503] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.2/32] ContainerID="f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" Namespace="calico-system" Pod="calico-kube-controllers-5bff9f9fd4-4vmq2" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:00.298287 env[1561]: 2025-11-01 01:30:00.262 [INFO][4503] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali141d6dd5341 ContainerID="f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" 
Namespace="calico-system" Pod="calico-kube-controllers-5bff9f9fd4-4vmq2" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:00.298287 env[1561]: 2025-11-01 01:30:00.291 [INFO][4503] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" Namespace="calico-system" Pod="calico-kube-controllers-5bff9f9fd4-4vmq2" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:00.298287 env[1561]: 2025-11-01 01:30:00.291 [INFO][4503] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" Namespace="calico-system" Pod="calico-kube-controllers-5bff9f9fd4-4vmq2" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0", GenerateName:"calico-kube-controllers-5bff9f9fd4-", Namespace:"calico-system", SelfLink:"", UID:"593908a5-f718-4b03-b095-540ff204a4bd", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bff9f9fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88", Pod:"calico-kube-controllers-5bff9f9fd4-4vmq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali141d6dd5341", MAC:"92:b0:39:8f:bf:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:00.298287 env[1561]: 2025-11-01 01:30:00.297 [INFO][4503] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88" Namespace="calico-system" Pod="calico-kube-controllers-5bff9f9fd4-4vmq2" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:00.302492 env[1561]: time="2025-11-01T01:30:00.302438006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:30:00.302492 env[1561]: time="2025-11-01T01:30:00.302477159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:30:00.302492 env[1561]: time="2025-11-01T01:30:00.302485741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:30:00.302662 env[1561]: time="2025-11-01T01:30:00.302619194Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88 pid=4555 runtime=io.containerd.runc.v2 Nov 1 01:30:00.302684 kubelet[2505]: E1101 01:30:00.302536 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:30:00.302000 audit[4565]: NETFILTER_CFG table=filter:105 family=2 entries=36 op=nft_register_chain pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:30:00.302000 audit[4565]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffd2a5a2a80 a2=0 a3=7ffd2a5a2a6c items=0 ppid=4216 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:00.302000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:30:00.306454 systemd-networkd[1315]: vxlan.calico: Gained IPv6LL Nov 1 01:30:00.309881 systemd[1]: Started cri-containerd-f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88.scope. 
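Aside: the proctitle= values in the audit records above are hex-encoded argv buffers with NUL bytes between the arguments. The value beginning 627066746F6F6C... decodes to "bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A", and the one beginning 69707461626C65732D6E6674... to "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000". A minimal Python sketch for decoding such a field (not part of the journal; the variable names are illustrative):

  # Decode an audit PROCTITLE value: the kernel records argv as one buffer,
  # hex-encoded, with NUL bytes separating the individual arguments.
  hex_proctitle = (
      "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073"
      "686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F"
      "7864702F70726566696C7465725F76315F63616C69636F5F746D705F41"
  )
  argv = bytes.fromhex(hex_proctitle).split(b"\x00")
  print(" ".join(arg.decode() for arg in argv))
  # -> bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A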
Nov 1 01:30:00.313000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit: BPF prog-id=168 op=LOAD Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=4555 pid=4567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:00.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631303832613464646537316166656138306132663838343139626266 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=4555 pid=4567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:00.313000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631303832613464646537316166656138306132663838343139626266 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit: BPF prog-id=169 op=LOAD Nov 1 01:30:00.313000 audit[4567]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c000220f60 items=0 ppid=4555 pid=4567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:00.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631303832613464646537316166656138306132663838343139626266 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: 
denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit: BPF prog-id=170 op=LOAD Nov 1 01:30:00.313000 audit[4567]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c000220fa8 items=0 ppid=4555 pid=4567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:00.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631303832613464646537316166656138306132663838343139626266 Nov 1 01:30:00.313000 audit: BPF prog-id=170 op=UNLOAD Nov 1 01:30:00.313000 audit: BPF prog-id=169 op=UNLOAD Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC 
avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { perfmon } for pid=4567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit[4567]: AVC avc: denied { bpf } for pid=4567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:00.313000 audit: BPF prog-id=171 op=LOAD Nov 1 01:30:00.313000 audit[4567]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0002213b8 items=0 ppid=4555 pid=4567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:00.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631303832613464646537316166656138306132663838343139626266 Nov 1 01:30:00.325000 audit[4584]: NETFILTER_CFG table=filter:106 family=2 entries=20 op=nft_register_rule pid=4584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:30:00.325000 audit[4584]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd3416d790 a2=0 a3=7ffd3416d77c items=0 ppid=2709 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:00.325000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:30:00.330994 env[1561]: time="2025-11-01T01:30:00.330967852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bff9f9fd4-4vmq2,Uid:593908a5-f718-4b03-b095-540ff204a4bd,Namespace:calico-system,Attempt:1,} returns sandbox id \"f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88\"" Nov 1 01:30:00.331716 env[1561]: time="2025-11-01T01:30:00.331700762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:30:00.336000 audit[4584]: NETFILTER_CFG table=nat:107 family=2 entries=14 op=nft_register_rule pid=4584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:30:00.336000 audit[4584]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd3416d790 a2=0 a3=0 items=0 ppid=2709 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:00.336000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:30:00.701387 env[1561]: time="2025-11-01T01:30:00.701281727Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:00.707596 env[1561]: time="2025-11-01T01:30:00.707444337Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:30:00.708269 kubelet[2505]: E1101 01:30:00.707853 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:30:00.708269 kubelet[2505]: E1101 01:30:00.707936 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:30:00.708269 kubelet[2505]: E1101 01:30:00.708111 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5bff9f9fd4-4vmq2_calico-system(593908a5-f718-4b03-b095-540ff204a4bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:00.708269 kubelet[2505]: E1101 01:30:00.708205 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:30:01.018321 kubelet[2505]: I1101 01:30:01.018087 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:30:01.154944 env[1561]: time="2025-11-01T01:30:01.154797208Z" level=info msg="StopPodSandbox for \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\"" Nov 1 01:30:01.154944 env[1561]: time="2025-11-01T01:30:01.154867251Z" level=info msg="StopPodSandbox for \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\"" Nov 1 01:30:01.248754 env[1561]: 2025-11-01 01:30:01.223 [INFO][4677] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:30:01.248754 env[1561]: 2025-11-01 01:30:01.223 [INFO][4677] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" iface="eth0" netns="/var/run/netns/cni-7eecfdc4-8cb3-826a-b61b-2ae0aedffbdd" Nov 1 01:30:01.248754 env[1561]: 2025-11-01 01:30:01.223 [INFO][4677] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" iface="eth0" netns="/var/run/netns/cni-7eecfdc4-8cb3-826a-b61b-2ae0aedffbdd" Nov 1 01:30:01.248754 env[1561]: 2025-11-01 01:30:01.223 [INFO][4677] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" iface="eth0" netns="/var/run/netns/cni-7eecfdc4-8cb3-826a-b61b-2ae0aedffbdd" Nov 1 01:30:01.248754 env[1561]: 2025-11-01 01:30:01.223 [INFO][4677] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:30:01.248754 env[1561]: 2025-11-01 01:30:01.223 [INFO][4677] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:30:01.248754 env[1561]: 2025-11-01 01:30:01.239 [INFO][4712] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" HandleID="k8s-pod-network.6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Workload="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:01.248754 env[1561]: 2025-11-01 01:30:01.239 [INFO][4712] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:01.248754 env[1561]: 2025-11-01 01:30:01.239 [INFO][4712] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:01.248754 env[1561]: 2025-11-01 01:30:01.245 [WARNING][4712] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" HandleID="k8s-pod-network.6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Workload="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:01.248754 env[1561]: 2025-11-01 01:30:01.245 [INFO][4712] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" HandleID="k8s-pod-network.6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Workload="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:01.248754 env[1561]: 2025-11-01 01:30:01.246 [INFO][4712] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:01.248754 env[1561]: 2025-11-01 01:30:01.247 [INFO][4677] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:30:01.249223 env[1561]: time="2025-11-01T01:30:01.248830981Z" level=info msg="TearDown network for sandbox \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\" successfully" Nov 1 01:30:01.249223 env[1561]: time="2025-11-01T01:30:01.248858435Z" level=info msg="StopPodSandbox for \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\" returns successfully" Nov 1 01:30:01.250573 env[1561]: time="2025-11-01T01:30:01.250543899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9wz7k,Uid:79df0ba2-6e86-422c-8f93-652dfb942b69,Namespace:calico-system,Attempt:1,}" Nov 1 01:30:01.251705 systemd[1]: run-netns-cni\x2d7eecfdc4\x2d8cb3\x2d826a\x2db61b\x2d2ae0aedffbdd.mount: Deactivated successfully. 
Nov 1 01:30:01.256201 env[1561]: 2025-11-01 01:30:01.225 [INFO][4678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:30:01.256201 env[1561]: 2025-11-01 01:30:01.225 [INFO][4678] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" iface="eth0" netns="/var/run/netns/cni-5b03a1f9-0a08-3541-308d-fb70d902a41a" Nov 1 01:30:01.256201 env[1561]: 2025-11-01 01:30:01.225 [INFO][4678] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" iface="eth0" netns="/var/run/netns/cni-5b03a1f9-0a08-3541-308d-fb70d902a41a" Nov 1 01:30:01.256201 env[1561]: 2025-11-01 01:30:01.225 [INFO][4678] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" iface="eth0" netns="/var/run/netns/cni-5b03a1f9-0a08-3541-308d-fb70d902a41a" Nov 1 01:30:01.256201 env[1561]: 2025-11-01 01:30:01.225 [INFO][4678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:30:01.256201 env[1561]: 2025-11-01 01:30:01.225 [INFO][4678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:30:01.256201 env[1561]: 2025-11-01 01:30:01.240 [INFO][4718] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" HandleID="k8s-pod-network.a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Workload="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:01.256201 env[1561]: 2025-11-01 01:30:01.240 [INFO][4718] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:01.256201 env[1561]: 2025-11-01 01:30:01.246 [INFO][4718] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:01.256201 env[1561]: 2025-11-01 01:30:01.252 [WARNING][4718] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" HandleID="k8s-pod-network.a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Workload="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:01.256201 env[1561]: 2025-11-01 01:30:01.252 [INFO][4718] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" HandleID="k8s-pod-network.a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Workload="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:01.256201 env[1561]: 2025-11-01 01:30:01.253 [INFO][4718] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:01.256201 env[1561]: 2025-11-01 01:30:01.254 [INFO][4678] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:30:01.256731 env[1561]: time="2025-11-01T01:30:01.256263644Z" level=info msg="TearDown network for sandbox \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\" successfully" Nov 1 01:30:01.256731 env[1561]: time="2025-11-01T01:30:01.256285939Z" level=info msg="StopPodSandbox for \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\" returns successfully" Nov 1 01:30:01.257476 env[1561]: time="2025-11-01T01:30:01.257449582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6xhxh,Uid:9aac0066-097a-4582-8ce8-a3a1ddb41b3d,Namespace:calico-system,Attempt:1,}" Nov 1 01:30:01.261159 systemd[1]: run-netns-cni\x2d5b03a1f9\x2d0a08\x2d3541\x2d308d\x2dfb70d902a41a.mount: Deactivated successfully. Nov 1 01:30:01.305017 kubelet[2505]: E1101 01:30:01.304936 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:30:01.325648 systemd-networkd[1315]: cali35a2f823f05: Link UP Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.280 [INFO][4744] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0 csi-node-driver- calico-system 79df0ba2-6e86-422c-8f93-652dfb942b69 923 0 2025-11-01 01:29:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-n-34cd8b9336 csi-node-driver-9wz7k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali35a2f823f05 [] [] }} ContainerID="95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" Namespace="calico-system" Pod="csi-node-driver-9wz7k" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-" Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.280 [INFO][4744] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" Namespace="calico-system" Pod="csi-node-driver-9wz7k" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.297 [INFO][4793] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" HandleID="k8s-pod-network.95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" Workload="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.297 [INFO][4793] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" 
HandleID="k8s-pod-network.95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" Workload="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000351d50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-34cd8b9336", "pod":"csi-node-driver-9wz7k", "timestamp":"2025-11-01 01:30:01.297264813 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34cd8b9336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.297 [INFO][4793] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.297 [INFO][4793] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.297 [INFO][4793] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34cd8b9336' Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.303 [INFO][4793] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.307 [INFO][4793] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.311 [INFO][4793] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.312 [INFO][4793] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.314 [INFO][4793] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.314 [INFO][4793] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.315 [INFO][4793] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.318 [INFO][4793] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.322 [INFO][4793] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.3/26] block=192.168.114.0/26 handle="k8s-pod-network.95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.322 [INFO][4793] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.3/26] handle="k8s-pod-network.95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.322 [INFO][4793] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:30:01.333120 env[1561]: 2025-11-01 01:30:01.322 [INFO][4793] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.3/26] IPv6=[] ContainerID="95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" HandleID="k8s-pod-network.95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" Workload="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:01.333724 env[1561]: 2025-11-01 01:30:01.323 [INFO][4744] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" Namespace="calico-system" Pod="csi-node-driver-9wz7k" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79df0ba2-6e86-422c-8f93-652dfb942b69", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"", Pod:"csi-node-driver-9wz7k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali35a2f823f05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:01.333724 env[1561]: 2025-11-01 01:30:01.323 [INFO][4744] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.3/32] ContainerID="95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" Namespace="calico-system" Pod="csi-node-driver-9wz7k" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:01.333724 env[1561]: 2025-11-01 01:30:01.323 [INFO][4744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35a2f823f05 ContainerID="95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" Namespace="calico-system" Pod="csi-node-driver-9wz7k" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:01.333724 env[1561]: 2025-11-01 01:30:01.325 [INFO][4744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" Namespace="calico-system" Pod="csi-node-driver-9wz7k" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:01.333724 env[1561]: 2025-11-01 01:30:01.325 [INFO][4744] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" 
Namespace="calico-system" Pod="csi-node-driver-9wz7k" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79df0ba2-6e86-422c-8f93-652dfb942b69", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf", Pod:"csi-node-driver-9wz7k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali35a2f823f05", MAC:"d6:30:52:6e:13:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:01.333724 env[1561]: 2025-11-01 01:30:01.331 [INFO][4744] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf" Namespace="calico-system" Pod="csi-node-driver-9wz7k" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:01.339345 env[1561]: time="2025-11-01T01:30:01.339301826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:30:01.339345 env[1561]: time="2025-11-01T01:30:01.339331128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:30:01.339345 env[1561]: time="2025-11-01T01:30:01.339346197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:30:01.339504 env[1561]: time="2025-11-01T01:30:01.339442227Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf pid=4836 runtime=io.containerd.runc.v2 Nov 1 01:30:01.346780 systemd[1]: Started cri-containerd-95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf.scope. 
Nov 1 01:30:01.352408 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:30:01.352463 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali35a2f823f05: link becomes ready Nov 1 01:30:01.359000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.359000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.359000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.359000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.359000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.359000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.359000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.359000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.359000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit: BPF prog-id=172 op=LOAD Nov 1 01:30:01.379835 systemd-networkd[1315]: cali35a2f823f05: Gained carrier Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=4836 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:01.378000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935623233336538623633623335636233626561396335626537643430 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 
a1=c0001976b0 a2=3c a3=c items=0 ppid=4836 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:01.378000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935623233336538623633623335636233626561396335626537643430 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit: BPF prog-id=173 op=LOAD Nov 1 01:30:01.378000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00032eae0 items=0 ppid=4836 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:01.378000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935623233336538623633623335636233626561396335626537643430 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit: BPF prog-id=174 op=LOAD Nov 1 01:30:01.378000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00032eb28 items=0 ppid=4836 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:01.378000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935623233336538623633623335636233626561396335626537643430 Nov 1 01:30:01.378000 audit: BPF prog-id=174 op=UNLOAD Nov 1 01:30:01.378000 audit: BPF prog-id=173 op=UNLOAD Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { perfmon } for pid=4845 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit[4845]: AVC avc: denied { bpf } for pid=4845 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.378000 audit: BPF prog-id=175 op=LOAD Nov 1 01:30:01.378000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00032ef38 items=0 ppid=4836 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:01.378000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935623233336538623633623335636233626561396335626537643430 Nov 1 01:30:01.385645 env[1561]: time="2025-11-01T01:30:01.385615803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9wz7k,Uid:79df0ba2-6e86-422c-8f93-652dfb942b69,Namespace:calico-system,Attempt:1,} returns sandbox id \"95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf\"" Nov 1 01:30:01.386482 env[1561]: time="2025-11-01T01:30:01.386468209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:30:01.385000 audit[4872]: NETFILTER_CFG table=filter:108 family=2 entries=46 op=nft_register_chain pid=4872 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:30:01.385000 audit[4872]: SYSCALL arch=c000003e syscall=46 success=yes exit=23616 a0=3 a1=7fff31303bd0 a2=0 a3=7fff31303bbc items=0 ppid=4216 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:01.385000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:30:01.422738 systemd-networkd[1315]: calib3833dab0fe: Link UP Nov 1 01:30:01.450060 systemd-networkd[1315]: calib3833dab0fe: Gained carrier Nov 1 01:30:01.450404 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib3833dab0fe: link becomes ready Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.284 [INFO][4754] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0 goldmane-7c778bb748- calico-system 9aac0066-097a-4582-8ce8-a3a1ddb41b3d 924 0 2025-11-01 
01:29:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-n-34cd8b9336 goldmane-7c778bb748-6xhxh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib3833dab0fe [] [] }} ContainerID="3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" Namespace="calico-system" Pod="goldmane-7c778bb748-6xhxh" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-" Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.284 [INFO][4754] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" Namespace="calico-system" Pod="goldmane-7c778bb748-6xhxh" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.301 [INFO][4799] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" HandleID="k8s-pod-network.3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" Workload="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.301 [INFO][4799] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" HandleID="k8s-pod-network.3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" Workload="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000483f60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-34cd8b9336", "pod":"goldmane-7c778bb748-6xhxh", "timestamp":"2025-11-01 01:30:01.301811299 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34cd8b9336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.301 [INFO][4799] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.322 [INFO][4799] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
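Annotation. Both sandbox setups funnel through the same host-wide IPAM lock: the goldmane request logs "About to acquire" at 01:30:01.301 but only "Acquired" at 01:30:01.322, after the csi-node-driver assignment released it. Serializing per host keeps concurrent CNI ADDs from double-claiming addresses out of the same /26 block. A generic sketch of that serialization pattern with an advisory file lock; this is not Calico's implementation, and the lock path and helper name are hypothetical.

```python
# Generic host-wide lock pattern: block until the advisory lock is held,
# do the allocation, release on exit.
import fcntl
from contextlib import contextmanager

LOCK_PATH = "/tmp/example-ipam.lock"   # hypothetical lock file

@contextmanager
def host_wide_lock(path: str = LOCK_PATH):
    """Hold an exclusive advisory lock for the duration of the block."""
    with open(path, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks while another holder exists
        try:
            yield
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

with host_wide_lock():
    pass  # assign addresses from the block while holding the lock
```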
Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.322 [INFO][4799] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34cd8b9336' Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.403 [INFO][4799] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.407 [INFO][4799] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.411 [INFO][4799] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.412 [INFO][4799] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.414 [INFO][4799] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.414 [INFO][4799] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.415 [INFO][4799] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26 Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.418 [INFO][4799] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.421 [INFO][4799] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.4/26] block=192.168.114.0/26 handle="k8s-pod-network.3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.421 [INFO][4799] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.4/26] handle="k8s-pod-network.3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.421 [INFO][4799] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:30:01.473967 env[1561]: 2025-11-01 01:30:01.421 [INFO][4799] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.4/26] IPv6=[] ContainerID="3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" HandleID="k8s-pod-network.3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" Workload="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:01.474424 env[1561]: 2025-11-01 01:30:01.421 [INFO][4754] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" Namespace="calico-system" Pod="goldmane-7c778bb748-6xhxh" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"9aac0066-097a-4582-8ce8-a3a1ddb41b3d", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"", Pod:"goldmane-7c778bb748-6xhxh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib3833dab0fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:01.474424 env[1561]: 2025-11-01 01:30:01.422 [INFO][4754] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.4/32] ContainerID="3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" Namespace="calico-system" Pod="goldmane-7c778bb748-6xhxh" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:01.474424 env[1561]: 2025-11-01 01:30:01.422 [INFO][4754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3833dab0fe ContainerID="3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" Namespace="calico-system" Pod="goldmane-7c778bb748-6xhxh" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:01.474424 env[1561]: 2025-11-01 01:30:01.450 [INFO][4754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" Namespace="calico-system" Pod="goldmane-7c778bb748-6xhxh" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:01.474424 env[1561]: 2025-11-01 01:30:01.450 [INFO][4754] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" Namespace="calico-system" Pod="goldmane-7c778bb748-6xhxh" 
WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"9aac0066-097a-4582-8ce8-a3a1ddb41b3d", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26", Pod:"goldmane-7c778bb748-6xhxh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib3833dab0fe", MAC:"a6:f6:89:e6:56:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:01.474424 env[1561]: 2025-11-01 01:30:01.473 [INFO][4754] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26" Namespace="calico-system" Pod="goldmane-7c778bb748-6xhxh" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:01.478852 env[1561]: time="2025-11-01T01:30:01.478792034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:30:01.478852 env[1561]: time="2025-11-01T01:30:01.478815295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:30:01.478852 env[1561]: time="2025-11-01T01:30:01.478822310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:30:01.478963 env[1561]: time="2025-11-01T01:30:01.478885127Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26 pid=4889 runtime=io.containerd.runc.v2 Nov 1 01:30:01.479000 audit[4899]: NETFILTER_CFG table=filter:109 family=2 entries=48 op=nft_register_chain pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:30:01.479000 audit[4899]: SYSCALL arch=c000003e syscall=46 success=yes exit=26352 a0=3 a1=7ffc57fbe500 a2=0 a3=7ffc57fbe4ec items=0 ppid=4216 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:01.479000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:30:01.484419 systemd[1]: Started cri-containerd-3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26.scope. Nov 1 01:30:01.489000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit: BPF prog-id=176 op=LOAD Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
01:30:01.489000 audit[4900]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=4889 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:01.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363336265376563383730333339333138343236623234633039646136 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001476b0 a2=3c a3=c items=0 ppid=4889 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:01.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363336265376563383730333339333138343236623234633039646136 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit: BPF prog-id=177 op=LOAD Nov 1 01:30:01.489000 audit[4900]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001479d8 a2=78 a3=c000214be0 items=0 ppid=4889 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:01.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363336265376563383730333339333138343236623234633039646136 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.489000 audit: BPF prog-id=178 op=LOAD Nov 1 01:30:01.489000 audit[4900]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000147770 a2=78 a3=c000214c28 items=0 ppid=4889 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:01.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363336265376563383730333339333138343236623234633039646136 Nov 1 01:30:01.490000 audit: BPF prog-id=178 op=UNLOAD Nov 1 01:30:01.490000 audit: BPF prog-id=177 op=UNLOAD Nov 1 01:30:01.490000 audit[4900]: AVC avc: denied { bpf } 
for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.490000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.490000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.490000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.490000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.490000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.490000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.490000 audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.490000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.490000 audit[4900]: AVC avc: denied { bpf } for pid=4900 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:01.490000 audit: BPF prog-id=179 op=LOAD Nov 1 01:30:01.490000 audit[4900]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000147c30 a2=78 a3=c000215038 items=0 ppid=4889 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:01.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363336265376563383730333339333138343236623234633039646136 Nov 1 01:30:01.506720 env[1561]: time="2025-11-01T01:30:01.506695486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6xhxh,Uid:9aac0066-097a-4582-8ce8-a3a1ddb41b3d,Namespace:calico-system,Attempt:1,} returns sandbox id \"3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26\"" Nov 1 01:30:01.779381 env[1561]: time="2025-11-01T01:30:01.779237209Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:01.780285 env[1561]: time="2025-11-01T01:30:01.780167398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:30:01.780741 kubelet[2505]: E1101 01:30:01.780626 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:30:01.780741 kubelet[2505]: E1101 01:30:01.780713 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:30:01.781699 kubelet[2505]: E1101 01:30:01.781054 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:01.781922 env[1561]: time="2025-11-01T01:30:01.781497848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:30:02.154701 env[1561]: time="2025-11-01T01:30:02.154459724Z" level=info msg="StopPodSandbox for \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\"" Nov 1 01:30:02.154701 env[1561]: time="2025-11-01T01:30:02.154467973Z" level=info msg="StopPodSandbox for \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\"" Nov 1 01:30:02.198583 env[1561]: time="2025-11-01T01:30:02.198527030Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:02.198879 env[1561]: time="2025-11-01T01:30:02.198849009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:30:02.199058 kubelet[2505]: E1101 01:30:02.199026 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:30:02.199105 kubelet[2505]: E1101 01:30:02.199068 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:30:02.199213 kubelet[2505]: E1101 01:30:02.199196 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-6xhxh_calico-system(9aac0066-097a-4582-8ce8-a3a1ddb41b3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:02.199252 kubelet[2505]: E1101 01:30:02.199231 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:30:02.199326 env[1561]: time="2025-11-01T01:30:02.199314360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:30:02.204488 env[1561]: 2025-11-01 01:30:02.185 [INFO][4947] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:30:02.204488 env[1561]: 2025-11-01 01:30:02.186 [INFO][4947] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" iface="eth0" netns="/var/run/netns/cni-fd3441c9-fef1-cada-9e63-27701202c78c" Nov 1 01:30:02.204488 env[1561]: 2025-11-01 01:30:02.186 [INFO][4947] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" iface="eth0" netns="/var/run/netns/cni-fd3441c9-fef1-cada-9e63-27701202c78c" Nov 1 01:30:02.204488 env[1561]: 2025-11-01 01:30:02.186 [INFO][4947] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" iface="eth0" netns="/var/run/netns/cni-fd3441c9-fef1-cada-9e63-27701202c78c" Nov 1 01:30:02.204488 env[1561]: 2025-11-01 01:30:02.186 [INFO][4947] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:30:02.204488 env[1561]: 2025-11-01 01:30:02.186 [INFO][4947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:30:02.204488 env[1561]: 2025-11-01 01:30:02.196 [INFO][4980] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" HandleID="k8s-pod-network.e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:02.204488 env[1561]: 2025-11-01 01:30:02.196 [INFO][4980] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:02.204488 env[1561]: 2025-11-01 01:30:02.196 [INFO][4980] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:02.204488 env[1561]: 2025-11-01 01:30:02.201 [WARNING][4980] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" HandleID="k8s-pod-network.e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:02.204488 env[1561]: 2025-11-01 01:30:02.201 [INFO][4980] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" HandleID="k8s-pod-network.e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:02.204488 env[1561]: 2025-11-01 01:30:02.202 [INFO][4980] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:02.204488 env[1561]: 2025-11-01 01:30:02.203 [INFO][4947] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:30:02.204769 env[1561]: time="2025-11-01T01:30:02.204558981Z" level=info msg="TearDown network for sandbox \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\" successfully" Nov 1 01:30:02.204769 env[1561]: time="2025-11-01T01:30:02.204578319Z" level=info msg="StopPodSandbox for \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\" returns successfully" Nov 1 01:30:02.205654 env[1561]: time="2025-11-01T01:30:02.205639528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6cfc8885-rhjss,Uid:438a7b01-7b7b-439d-a5c9-a6d4d681a41f,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:30:02.206111 systemd[1]: run-netns-cni\x2dfd3441c9\x2dfef1\x2dcada\x2d9e63\x2d27701202c78c.mount: Deactivated successfully. Nov 1 01:30:02.208796 env[1561]: 2025-11-01 01:30:02.186 [INFO][4946] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:30:02.208796 env[1561]: 2025-11-01 01:30:02.186 [INFO][4946] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" iface="eth0" netns="/var/run/netns/cni-27e840f0-c204-927a-6680-69cd677e9d44" Nov 1 01:30:02.208796 env[1561]: 2025-11-01 01:30:02.186 [INFO][4946] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" iface="eth0" netns="/var/run/netns/cni-27e840f0-c204-927a-6680-69cd677e9d44" Nov 1 01:30:02.208796 env[1561]: 2025-11-01 01:30:02.186 [INFO][4946] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" iface="eth0" netns="/var/run/netns/cni-27e840f0-c204-927a-6680-69cd677e9d44" Nov 1 01:30:02.208796 env[1561]: 2025-11-01 01:30:02.186 [INFO][4946] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:30:02.208796 env[1561]: 2025-11-01 01:30:02.186 [INFO][4946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:30:02.208796 env[1561]: 2025-11-01 01:30:02.196 [INFO][4982] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" HandleID="k8s-pod-network.4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:02.208796 env[1561]: 2025-11-01 01:30:02.196 [INFO][4982] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:02.208796 env[1561]: 2025-11-01 01:30:02.202 [INFO][4982] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:02.208796 env[1561]: 2025-11-01 01:30:02.206 [WARNING][4982] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" HandleID="k8s-pod-network.4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:02.208796 env[1561]: 2025-11-01 01:30:02.206 [INFO][4982] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" HandleID="k8s-pod-network.4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:02.208796 env[1561]: 2025-11-01 01:30:02.207 [INFO][4982] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:02.208796 env[1561]: 2025-11-01 01:30:02.208 [INFO][4946] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:30:02.209054 env[1561]: time="2025-11-01T01:30:02.208838431Z" level=info msg="TearDown network for sandbox \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\" successfully" Nov 1 01:30:02.209054 env[1561]: time="2025-11-01T01:30:02.208860626Z" level=info msg="StopPodSandbox for \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\" returns successfully" Nov 1 01:30:02.209613 env[1561]: time="2025-11-01T01:30:02.209569170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6cfc8885-lqwd9,Uid:cb08aa02-32db-4371-b5cc-c9a5a7fd22c8,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:30:02.210820 systemd[1]: run-netns-cni\x2d27e840f0\x2dc204\x2d927a\x2d6680\x2d69cd677e9d44.mount: Deactivated successfully. 
Nov 1 01:30:02.227482 systemd-networkd[1315]: cali141d6dd5341: Gained IPv6LL Nov 1 01:30:02.263311 systemd-networkd[1315]: cali0e36c40109f: Link UP Nov 1 01:30:02.290258 systemd-networkd[1315]: cali0e36c40109f: Gained carrier Nov 1 01:30:02.290423 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0e36c40109f: link becomes ready Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.228 [INFO][5022] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0 calico-apiserver-7b6cfc8885- calico-apiserver cb08aa02-32db-4371-b5cc-c9a5a7fd22c8 945 0 2025-11-01 01:29:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b6cfc8885 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-34cd8b9336 calico-apiserver-7b6cfc8885-lqwd9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0e36c40109f [] [] }} ContainerID="c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-lqwd9" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-" Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.228 [INFO][5022] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-lqwd9" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.239 [INFO][5060] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" HandleID="k8s-pod-network.c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.239 [INFO][5060] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" HandleID="k8s-pod-network.c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-34cd8b9336", "pod":"calico-apiserver-7b6cfc8885-lqwd9", "timestamp":"2025-11-01 01:30:02.239739722 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34cd8b9336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.239 [INFO][5060] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.239 [INFO][5060] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.239 [INFO][5060] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34cd8b9336' Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.244 [INFO][5060] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.248 [INFO][5060] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.251 [INFO][5060] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.252 [INFO][5060] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.254 [INFO][5060] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.254 [INFO][5060] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.255 [INFO][5060] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986 Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.257 [INFO][5060] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.261 [INFO][5060] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.5/26] block=192.168.114.0/26 handle="k8s-pod-network.c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.261 [INFO][5060] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.5/26] handle="k8s-pod-network.c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.261 [INFO][5060] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
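The IPAM messages above show Calico confirming an affinity for the block 192.168.114.0/26 and then claiming 192.168.114.5/26 for the new calico-apiserver endpoint (192.168.114.6/26 follows later in this section). For orientation only, an illustrative check of that block arithmetic with Python's standard ipaddress module: a /26 spans 64 addresses, 192.168.114.0 through 192.168.114.63, and both assigned addresses fall inside it.

```python
import ipaddress

# The affine block reported by ipam/ipam.go in the messages above.
block = ipaddress.ip_network("192.168.114.0/26")

# A /26 holds 2**(32-26) = 64 addresses.
print(block.num_addresses)       # 64
print(block[0], block[-1])       # 192.168.114.0 192.168.114.63

# The addresses Calico assigns to the two calico-apiserver pods in this section.
for addr in ("192.168.114.5", "192.168.114.6"):
    print(addr, ipaddress.ip_address(addr) in block)   # both True
```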
Nov 1 01:30:02.296997 env[1561]: 2025-11-01 01:30:02.261 [INFO][5060] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.5/26] IPv6=[] ContainerID="c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" HandleID="k8s-pod-network.c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:02.297672 env[1561]: 2025-11-01 01:30:02.262 [INFO][5022] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-lqwd9" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0", GenerateName:"calico-apiserver-7b6cfc8885-", Namespace:"calico-apiserver", SelfLink:"", UID:"cb08aa02-32db-4371-b5cc-c9a5a7fd22c8", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6cfc8885", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"", Pod:"calico-apiserver-7b6cfc8885-lqwd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e36c40109f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:02.297672 env[1561]: 2025-11-01 01:30:02.262 [INFO][5022] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.5/32] ContainerID="c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-lqwd9" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:02.297672 env[1561]: 2025-11-01 01:30:02.262 [INFO][5022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e36c40109f ContainerID="c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-lqwd9" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:02.297672 env[1561]: 2025-11-01 01:30:02.290 [INFO][5022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-lqwd9" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:02.297672 env[1561]: 2025-11-01 01:30:02.290 [INFO][5022] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-lqwd9" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0", GenerateName:"calico-apiserver-7b6cfc8885-", Namespace:"calico-apiserver", SelfLink:"", UID:"cb08aa02-32db-4371-b5cc-c9a5a7fd22c8", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6cfc8885", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986", Pod:"calico-apiserver-7b6cfc8885-lqwd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e36c40109f", MAC:"96:cf:9d:ef:28:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:02.297672 env[1561]: 2025-11-01 01:30:02.296 [INFO][5022] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-lqwd9" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:02.301405 env[1561]: time="2025-11-01T01:30:02.301362380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:30:02.301405 env[1561]: time="2025-11-01T01:30:02.301385151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:30:02.301487 env[1561]: time="2025-11-01T01:30:02.301394888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:30:02.301550 env[1561]: time="2025-11-01T01:30:02.301531349Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986 pid=5107 runtime=io.containerd.runc.v2 Nov 1 01:30:02.302000 audit[5117]: NETFILTER_CFG table=filter:110 family=2 entries=58 op=nft_register_chain pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:30:02.307491 systemd[1]: Started cri-containerd-c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986.scope. 
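Most of the audit records surrounding the shim start-up are SELinux AVC denials for the perfmon and bpf capabilities raised by runc. As a rough illustrative aid for reading them (the field layout is assumed from the lines above, this is not an official audit parser), a small regex can pull the interesting fields out of one denial line; capability numbers 38 and 39 correspond to CAP_PERFMON and CAP_BPF.

```python
import re

AVC_RE = re.compile(
    r'avc: denied \{ (?P<perm>[^}]+?) \} for pid=(?P<pid>\d+) '
    r'comm="(?P<comm>[^"]+)" capability=(?P<cap>\d+)'
)
CAP_NAMES = {38: "CAP_PERFMON", 39: "CAP_BPF"}

# One of the AVC denial lines from this section, reproduced verbatim.
sample = ('audit[4900]: AVC avc: denied { perfmon } for pid=4900 comm="runc" '
          'capability=38 scontext=system_u:system_r:kernel_t:s0 '
          'tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0')

m = AVC_RE.search(sample)
print(m["comm"], m["perm"], CAP_NAMES.get(int(m["cap"]), m["cap"]))
# runc perfmon CAP_PERFMON
```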
Nov 1 01:30:02.309271 kubelet[2505]: E1101 01:30:02.309252 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:30:02.309362 kubelet[2505]: E1101 01:30:02.309352 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:30:02.328110 kernel: kauditd_printk_skb: 833 callbacks suppressed Nov 1 01:30:02.328181 kernel: audit: type=1325 audit(1761960602.302:1197): table=filter:110 family=2 entries=58 op=nft_register_chain pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:30:02.365217 systemd-networkd[1315]: cali7a7f4d59257: Link UP Nov 1 01:30:02.302000 audit[5117]: SYSCALL arch=c000003e syscall=46 success=yes exit=30568 a0=3 a1=7ffe6bd479c0 a2=0 a3=7ffe6bd479ac items=0 ppid=4216 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:02.383405 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:30:02.383436 kernel: audit: type=1300 audit(1761960602.302:1197): arch=c000003e syscall=46 success=yes exit=30568 a0=3 a1=7ffe6bd479c0 a2=0 a3=7ffe6bd479ac items=0 ppid=4216 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:02.497512 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7a7f4d59257: link becomes ready Nov 1 01:30:02.497565 kernel: audit: type=1327 audit(1761960602.302:1197): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:30:02.302000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:30:02.523940 systemd-networkd[1315]: cali7a7f4d59257: Gained carrier Nov 1 01:30:02.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.580403 kernel: audit: type=1400 audit(1761960602.389:1198): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.582453 
env[1561]: 2025-11-01 01:30:02.227 [INFO][5011] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0 calico-apiserver-7b6cfc8885- calico-apiserver 438a7b01-7b7b-439d-a5c9-a6d4d681a41f 944 0 2025-11-01 01:29:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b6cfc8885 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-34cd8b9336 calico-apiserver-7b6cfc8885-rhjss eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7a7f4d59257 [] [] }} ContainerID="d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-rhjss" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-" Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.227 [INFO][5011] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-rhjss" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.239 [INFO][5058] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" HandleID="k8s-pod-network.d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.239 [INFO][5058] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" HandleID="k8s-pod-network.d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000363c20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-34cd8b9336", "pod":"calico-apiserver-7b6cfc8885-rhjss", "timestamp":"2025-11-01 01:30:02.239732901 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34cd8b9336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.239 [INFO][5058] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.261 [INFO][5058] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.261 [INFO][5058] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34cd8b9336' Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.346 [INFO][5058] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.349 [INFO][5058] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.352 [INFO][5058] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.354 [INFO][5058] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.355 [INFO][5058] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.355 [INFO][5058] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.356 [INFO][5058] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.359 [INFO][5058] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.362 [INFO][5058] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.6/26] block=192.168.114.0/26 handle="k8s-pod-network.d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.362 [INFO][5058] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.6/26] handle="k8s-pod-network.d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.362 [INFO][5058] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:30:02.582453 env[1561]: 2025-11-01 01:30:02.362 [INFO][5058] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.6/26] IPv6=[] ContainerID="d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" HandleID="k8s-pod-network.d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:02.582996 env[1561]: 2025-11-01 01:30:02.363 [INFO][5011] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-rhjss" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0", GenerateName:"calico-apiserver-7b6cfc8885-", Namespace:"calico-apiserver", SelfLink:"", UID:"438a7b01-7b7b-439d-a5c9-a6d4d681a41f", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6cfc8885", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"", Pod:"calico-apiserver-7b6cfc8885-rhjss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a7f4d59257", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:02.582996 env[1561]: 2025-11-01 01:30:02.364 [INFO][5011] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.6/32] ContainerID="d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-rhjss" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:02.582996 env[1561]: 2025-11-01 01:30:02.364 [INFO][5011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a7f4d59257 ContainerID="d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-rhjss" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:02.582996 env[1561]: 2025-11-01 01:30:02.523 [INFO][5011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-rhjss" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:02.582996 env[1561]: 2025-11-01 01:30:02.524 [INFO][5011] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-rhjss" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0", GenerateName:"calico-apiserver-7b6cfc8885-", Namespace:"calico-apiserver", SelfLink:"", UID:"438a7b01-7b7b-439d-a5c9-a6d4d681a41f", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6cfc8885", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd", Pod:"calico-apiserver-7b6cfc8885-rhjss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a7f4d59257", MAC:"d2:10:c7:d5:51:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:02.582996 env[1561]: 2025-11-01 01:30:02.580 [INFO][5011] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd" Namespace="calico-apiserver" Pod="calico-apiserver-7b6cfc8885-rhjss" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:02.587588 env[1561]: time="2025-11-01T01:30:02.587520782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:30:02.587588 env[1561]: time="2025-11-01T01:30:02.587543528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:30:02.587588 env[1561]: time="2025-11-01T01:30:02.587550746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:30:02.587714 env[1561]: time="2025-11-01T01:30:02.587619980Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd pid=5151 runtime=io.containerd.runc.v2 Nov 1 01:30:02.593276 systemd[1]: Started cri-containerd-d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd.scope. 
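The PROCTITLE fields in the audit records above carry the runc command line as a single hex blob, with NUL bytes separating the argv entries. A short illustrative decoder, using the opening prefix of one of the blobs from this log as its sample input:

```python
def decode_proctitle(hex_blob: str) -> str:
    """Decode an auditd PROCTITLE hex value into a readable command line."""
    raw = bytes.fromhex(hex_blob)
    # argv entries are separated by NUL bytes in the proctitle record.
    return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

# Opening prefix of a PROCTITLE value from the records above:
prefix = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
print(decode_proctitle(prefix))
# runc --root /run/containerd/runc/k8s.io
```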
Nov 1 01:30:02.610531 systemd-networkd[1315]: calib3833dab0fe: Gained IPv6LL Nov 1 01:30:02.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.639445 kernel: audit: type=1400 audit(1761960602.389:1199): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.736989 env[1561]: time="2025-11-01T01:30:02.736929948Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:02.737558 env[1561]: time="2025-11-01T01:30:02.737478682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:30:02.737738 kubelet[2505]: E1101 01:30:02.737688 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:30:02.737738 kubelet[2505]: E1101 01:30:02.737717 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:30:02.737809 kubelet[2505]: E1101 01:30:02.737762 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:02.737809 kubelet[2505]: E1101 01:30:02.737785 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:30:02.755833 kernel: audit: type=1400 audit(1761960602.389:1200): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.755865 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Nov 1 01:30:02.755886 kernel: audit: type=1400 audit(1761960602.389:1201): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.780402 kernel: audit: audit_lost=2 audit_rate_limit=0 audit_backlog_limit=64 Nov 1 01:30:02.780433 kernel: audit: backlog limit exceeded Nov 1 01:30:02.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.496000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.496000 audit: BPF prog-id=180 op=LOAD Nov 1 01:30:02.496000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.496000 audit[5116]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=5107 pid=5116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:02.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333323864646335373165306633383934323737306232636161643961 Nov 1 01:30:02.496000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.496000 audit[5116]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 
a1=c0001976b0 a2=3c a3=c items=0 ppid=5107 pid=5116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:02.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333323864646335373165306633383934323737306232636161643961 Nov 1 01:30:02.496000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.496000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.496000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.496000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.496000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.496000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.496000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.496000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.496000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.526000 audit[5136]: NETFILTER_CFG table=filter:111 family=2 entries=20 op=nft_register_rule pid=5136 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:30:02.526000 audit[5136]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc19ef2840 a2=0 a3=7ffc19ef282c items=0 ppid=2709 pid=5136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:02.526000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:30:02.496000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.496000 audit: BPF prog-id=181 op=LOAD Nov 1 01:30:02.496000 audit[5116]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00029fb10 items=0 ppid=5107 pid=5116 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:02.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333323864646335373165306633383934323737306232636161643961 Nov 1 01:30:02.578000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.578000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.578000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.578000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.578000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.578000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.578000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.578000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.645000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.645000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.645000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.645000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.645000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.645000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.645000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.645000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.645000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.578000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.578000 audit: BPF prog-id=182 op=LOAD Nov 1 01:30:02.578000 audit[5116]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00029fb58 items=0 ppid=5107 pid=5116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:02.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333323864646335373165306633383934323737306232636161643961 Nov 1 01:30:02.695000 audit: BPF prog-id=182 op=UNLOAD Nov 1 01:30:02.695000 audit: BPF prog-id=181 op=UNLOAD Nov 1 01:30:02.695000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.695000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.695000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.695000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.695000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.695000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.695000 audit[5116]: AVC avc: denied { perfmon } for pid=5116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.695000 audit[5116]: AVC avc: denied { bpf } for pid=5116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.778000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { bpf } for pid=5161 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { bpf } for pid=5161 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { perfmon } for pid=5161 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { perfmon } for pid=5161 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { perfmon } for pid=5161 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { perfmon } for pid=5161 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { perfmon } for pid=5161 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { bpf } for pid=5161 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.695000 audit[5116]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000388208 items=0 ppid=5107 pid=5116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:02.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333323864646335373165306633383934323737306232636161643961 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { bpf } for pid=5161 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit: BPF prog-id=186 op=LOAD Nov 1 01:30:02.886000 audit[5161]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000145770 a2=78 a3=c000380a08 items=0 ppid=5151 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:02.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435323737383834353634633363303933666236666463363965383636 Nov 1 01:30:02.886000 audit: BPF prog-id=186 op=UNLOAD Nov 1 01:30:02.886000 audit: BPF prog-id=184 op=UNLOAD Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { bpf } for pid=5161 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { bpf } for pid=5161 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { bpf } for pid=5161 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { perfmon } for pid=5161 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { perfmon } for pid=5161 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { perfmon } for pid=5161 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { perfmon } for pid=5161 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { perfmon } for pid=5161 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { bpf } for pid=5161 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit[5161]: AVC avc: denied { bpf } for pid=5161 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:02.886000 audit: BPF prog-id=187 op=LOAD Nov 1 01:30:02.886000 audit[5161]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000145c30 a2=78 a3=c000380e18 items=0 ppid=5151 pid=5161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:02.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435323737383834353634633363303933666236666463363965383636 Nov 1 01:30:02.888000 audit[5136]: NETFILTER_CFG table=nat:112 family=2 entries=14 op=nft_register_rule pid=5136 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:30:02.888000 audit[5136]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc19ef2840 a2=0 a3=0 items=0 ppid=2709 pid=5136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:02.888000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:30:02.903435 env[1561]: time="2025-11-01T01:30:02.903401087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6cfc8885-rhjss,Uid:438a7b01-7b7b-439d-a5c9-a6d4d681a41f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd\"" Nov 1 01:30:02.904015 env[1561]: 
time="2025-11-01T01:30:02.903996877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b6cfc8885-lqwd9,Uid:cb08aa02-32db-4371-b5cc-c9a5a7fd22c8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986\"" Nov 1 01:30:02.904265 env[1561]: time="2025-11-01T01:30:02.904248644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:30:02.904000 audit[5190]: NETFILTER_CFG table=filter:113 family=2 entries=49 op=nft_register_chain pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:30:02.904000 audit[5190]: SYSCALL arch=c000003e syscall=46 success=yes exit=25436 a0=3 a1=7ffca68586d0 a2=0 a3=7ffca68586bc items=0 ppid=4216 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:02.904000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:30:02.994647 systemd-networkd[1315]: cali35a2f823f05: Gained IPv6LL Nov 1 01:30:03.153216 env[1561]: time="2025-11-01T01:30:03.153141481Z" level=info msg="StopPodSandbox for \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\"" Nov 1 01:30:03.204015 env[1561]: 2025-11-01 01:30:03.176 [INFO][5201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:30:03.204015 env[1561]: 2025-11-01 01:30:03.176 [INFO][5201] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" iface="eth0" netns="/var/run/netns/cni-89cd963e-7a63-e0dc-a249-fe6202e40f49" Nov 1 01:30:03.204015 env[1561]: 2025-11-01 01:30:03.176 [INFO][5201] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" iface="eth0" netns="/var/run/netns/cni-89cd963e-7a63-e0dc-a249-fe6202e40f49" Nov 1 01:30:03.204015 env[1561]: 2025-11-01 01:30:03.177 [INFO][5201] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" iface="eth0" netns="/var/run/netns/cni-89cd963e-7a63-e0dc-a249-fe6202e40f49" Nov 1 01:30:03.204015 env[1561]: 2025-11-01 01:30:03.177 [INFO][5201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:30:03.204015 env[1561]: 2025-11-01 01:30:03.177 [INFO][5201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:30:03.204015 env[1561]: 2025-11-01 01:30:03.186 [INFO][5217] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" HandleID="k8s-pod-network.43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:03.204015 env[1561]: 2025-11-01 01:30:03.186 [INFO][5217] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:03.204015 env[1561]: 2025-11-01 01:30:03.186 [INFO][5217] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:30:03.204015 env[1561]: 2025-11-01 01:30:03.200 [WARNING][5217] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" HandleID="k8s-pod-network.43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:03.204015 env[1561]: 2025-11-01 01:30:03.200 [INFO][5217] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" HandleID="k8s-pod-network.43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:03.204015 env[1561]: 2025-11-01 01:30:03.202 [INFO][5217] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:03.204015 env[1561]: 2025-11-01 01:30:03.203 [INFO][5201] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:30:03.204319 env[1561]: time="2025-11-01T01:30:03.204057250Z" level=info msg="TearDown network for sandbox \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\" successfully" Nov 1 01:30:03.204319 env[1561]: time="2025-11-01T01:30:03.204079213Z" level=info msg="StopPodSandbox for \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\" returns successfully" Nov 1 01:30:03.205441 env[1561]: time="2025-11-01T01:30:03.205427169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-54j2g,Uid:c8ca38b0-d8b7-4714-8abe-3e911e8eec29,Namespace:kube-system,Attempt:1,}" Nov 1 01:30:03.206919 systemd[1]: run-netns-cni\x2d89cd963e\x2d7a63\x2de0dc\x2da249\x2dfe6202e40f49.mount: Deactivated successfully. 
Nov 1 01:30:03.263492 systemd-networkd[1315]: calieb77a8bad17: Link UP Nov 1 01:30:03.269809 env[1561]: time="2025-11-01T01:30:03.269752396Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:03.270140 env[1561]: time="2025-11-01T01:30:03.270114766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:30:03.270317 kubelet[2505]: E1101 01:30:03.270296 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:30:03.270471 kubelet[2505]: E1101 01:30:03.270326 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:30:03.270497 kubelet[2505]: E1101 01:30:03.270469 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b6cfc8885-rhjss_calico-apiserver(438a7b01-7b7b-439d-a5c9-a6d4d681a41f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:03.270529 env[1561]: time="2025-11-01T01:30:03.270513078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:30:03.270554 kubelet[2505]: E1101 01:30:03.270510 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:30:03.289404 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calieb77a8bad17: link becomes ready Nov 1 01:30:03.289466 systemd-networkd[1315]: calieb77a8bad17: Gained carrier Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.227 [INFO][5239] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0 coredns-66bc5c9577- kube-system c8ca38b0-d8b7-4714-8abe-3e911e8eec29 970 0 2025-11-01 01:29:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-34cd8b9336 coredns-66bc5c9577-54j2g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieb77a8bad17 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } 
{liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" Namespace="kube-system" Pod="coredns-66bc5c9577-54j2g" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-" Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.227 [INFO][5239] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" Namespace="kube-system" Pod="coredns-66bc5c9577-54j2g" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.239 [INFO][5262] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" HandleID="k8s-pod-network.26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.239 [INFO][5262] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" HandleID="k8s-pod-network.26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f740), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-34cd8b9336", "pod":"coredns-66bc5c9577-54j2g", "timestamp":"2025-11-01 01:30:03.239672505 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34cd8b9336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.240 [INFO][5262] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.240 [INFO][5262] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.240 [INFO][5262] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34cd8b9336' Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.244 [INFO][5262] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.247 [INFO][5262] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.251 [INFO][5262] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.252 [INFO][5262] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.254 [INFO][5262] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.254 [INFO][5262] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.255 [INFO][5262] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725 Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.258 [INFO][5262] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.261 [INFO][5262] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.7/26] block=192.168.114.0/26 handle="k8s-pod-network.26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.261 [INFO][5262] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.7/26] handle="k8s-pod-network.26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.261 [INFO][5262] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:30:03.295773 env[1561]: 2025-11-01 01:30:03.261 [INFO][5262] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.7/26] IPv6=[] ContainerID="26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" HandleID="k8s-pod-network.26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:03.296473 env[1561]: 2025-11-01 01:30:03.262 [INFO][5239] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" Namespace="kube-system" Pod="coredns-66bc5c9577-54j2g" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c8ca38b0-d8b7-4714-8abe-3e911e8eec29", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"", Pod:"coredns-66bc5c9577-54j2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb77a8bad17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:03.296473 env[1561]: 2025-11-01 01:30:03.262 [INFO][5239] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.7/32] ContainerID="26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" Namespace="kube-system" Pod="coredns-66bc5c9577-54j2g" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:03.296473 env[1561]: 2025-11-01 01:30:03.262 [INFO][5239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb77a8bad17 ContainerID="26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" Namespace="kube-system" Pod="coredns-66bc5c9577-54j2g" 
WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:03.296473 env[1561]: 2025-11-01 01:30:03.289 [INFO][5239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" Namespace="kube-system" Pod="coredns-66bc5c9577-54j2g" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:03.296473 env[1561]: 2025-11-01 01:30:03.289 [INFO][5239] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" Namespace="kube-system" Pod="coredns-66bc5c9577-54j2g" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c8ca38b0-d8b7-4714-8abe-3e911e8eec29", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725", Pod:"coredns-66bc5c9577-54j2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb77a8bad17", MAC:"9e:41:24:e2:48:7a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:03.296679 env[1561]: 2025-11-01 01:30:03.294 [INFO][5239] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725" Namespace="kube-system" Pod="coredns-66bc5c9577-54j2g" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:03.300334 env[1561]: time="2025-11-01T01:30:03.300297611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:30:03.300334 env[1561]: time="2025-11-01T01:30:03.300321838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:30:03.300334 env[1561]: time="2025-11-01T01:30:03.300332662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:30:03.300465 env[1561]: time="2025-11-01T01:30:03.300408690Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725 pid=5292 runtime=io.containerd.runc.v2 Nov 1 01:30:03.302000 audit[5306]: NETFILTER_CFG table=filter:114 family=2 entries=58 op=nft_register_chain pid=5306 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:30:03.302000 audit[5306]: SYSCALL arch=c000003e syscall=46 success=yes exit=27288 a0=3 a1=7ffe499ab1d0 a2=0 a3=7ffe499ab1bc items=0 ppid=4216 pid=5306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:03.302000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:30:03.305992 systemd[1]: Started cri-containerd-26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725.scope. Nov 1 01:30:03.311859 kubelet[2505]: E1101 01:30:03.311834 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:30:03.311859 kubelet[2505]: E1101 01:30:03.311834 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:30:03.312063 kubelet[2505]: E1101 01:30:03.312050 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:30:03.311000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit: BPF prog-id=188 op=LOAD Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=5292 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:03.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643838653738626462376464653537336364373236393433656364 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 01:30:03.311000 audit[5302]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=5292 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:03.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643838653738626462376464653537336364373236393433656364 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit: BPF prog-id=189 op=LOAD Nov 1 01:30:03.311000 audit[5302]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000279d90 items=0 ppid=5292 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:03.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643838653738626462376464653537336364373236393433656364 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.311000 audit: BPF prog-id=190 op=LOAD Nov 1 01:30:03.311000 audit[5302]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000279dd8 items=0 ppid=5292 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:03.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643838653738626462376464653537336364373236393433656364 Nov 1 01:30:03.312000 audit: BPF prog-id=190 op=UNLOAD Nov 1 01:30:03.312000 audit: BPF prog-id=189 op=UNLOAD Nov 1 01:30:03.312000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.312000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.312000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.312000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.312000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.312000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.312000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.312000 audit[5302]: AVC avc: denied { perfmon } for pid=5302 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.312000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.312000 audit[5302]: AVC avc: denied { bpf } for pid=5302 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.312000 audit: BPF prog-id=191 op=LOAD Nov 1 01:30:03.312000 audit[5302]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0003ec1e8 items=0 ppid=5292 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:03.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236643838653738626462376464653537336364373236393433656364 Nov 1 01:30:03.334158 env[1561]: time="2025-11-01T01:30:03.334132644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-54j2g,Uid:c8ca38b0-d8b7-4714-8abe-3e911e8eec29,Namespace:kube-system,Attempt:1,} returns sandbox id \"26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725\"" Nov 1 01:30:03.336244 env[1561]: time="2025-11-01T01:30:03.336228236Z" level=info msg="CreateContainer within sandbox \"26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:30:03.340685 env[1561]: time="2025-11-01T01:30:03.340639705Z" level=info msg="CreateContainer within sandbox \"26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d61036bc817943cc1a661436e7cf831c6d527b937cd8eb4963a03466ab55dc19\"" Nov 1 01:30:03.340914 env[1561]: time="2025-11-01T01:30:03.340899772Z" level=info msg="StartContainer for \"d61036bc817943cc1a661436e7cf831c6d527b937cd8eb4963a03466ab55dc19\"" Nov 1 01:30:03.348646 systemd[1]: Started cri-containerd-d61036bc817943cc1a661436e7cf831c6d527b937cd8eb4963a03466ab55dc19.scope. 
Nov 1 01:30:03.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit: BPF prog-id=192 op=LOAD Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=5292 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:03.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436313033366263383137393433636331613636313433366537636638 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001476b0 a2=3c a3=8 items=0 ppid=5292 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:03.354000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436313033366263383137393433636331613636313433366537636638 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit: BPF prog-id=193 op=LOAD Nov 1 01:30:03.354000 audit[5334]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c0002c3e10 items=0 ppid=5292 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:03.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436313033366263383137393433636331613636313433366537636638 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: 
denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit: BPF prog-id=194 op=LOAD Nov 1 01:30:03.354000 audit[5334]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000147770 a2=78 a3=c0002c3e58 items=0 ppid=5292 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:03.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436313033366263383137393433636331613636313433366537636638 Nov 1 01:30:03.354000 audit: BPF prog-id=194 op=UNLOAD Nov 1 01:30:03.354000 audit: BPF prog-id=193 op=UNLOAD Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC 
avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { perfmon } for pid=5334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit[5334]: AVC avc: denied { bpf } for pid=5334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:03.354000 audit: BPF prog-id=195 op=LOAD Nov 1 01:30:03.354000 audit[5334]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c0002fe268 items=0 ppid=5292 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:03.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436313033366263383137393433636331613636313433366537636638 Nov 1 01:30:03.361557 env[1561]: time="2025-11-01T01:30:03.361526967Z" level=info msg="StartContainer for \"d61036bc817943cc1a661436e7cf831c6d527b937cd8eb4963a03466ab55dc19\" returns successfully" Nov 1 01:30:03.366000 audit[5344]: AVC avc: denied { getattr } for pid=5344 comm="coredns" path="cgroup:[4026532759]" dev="nsfs" ino=4026532759 scontext=system_u:system_r:svirt_lxc_net_t:s0:c145,c400 tcontext=system_u:object_r:nsfs_t:s0 tclass=file permissive=0 Nov 1 01:30:03.366000 audit[5344]: SYSCALL arch=c000003e syscall=262 success=no exit=-13 a0=ffffffffffffff9c a1=c000132f78 a2=c00072d148 a3=0 items=0 ppid=5292 pid=5344 auid=4294967295 uid=65532 gid=65532 euid=65532 suid=65532 fsuid=65532 egid=65532 sgid=65532 fsgid=65532 tty=(none) ses=4294967295 comm="coredns" exe="/coredns" subj=system_u:system_r:svirt_lxc_net_t:s0:c145,c400 key=(null) Nov 1 01:30:03.366000 audit: PROCTITLE proctitle=2F636F7265646E73002D636F6E66002F6574632F636F7265646E732F436F726566696C65 Nov 1 01:30:03.633694 env[1561]: time="2025-11-01T01:30:03.633621059Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:03.634211 env[1561]: time="2025-11-01T01:30:03.634164644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:30:03.634375 kubelet[2505]: E1101 01:30:03.634343 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:30:03.634465 kubelet[2505]: E1101 01:30:03.634386 2505 kuberuntime_image.go:43] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:30:03.634528 kubelet[2505]: E1101 01:30:03.634487 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b6cfc8885-lqwd9_calico-apiserver(cb08aa02-32db-4371-b5cc-c9a5a7fd22c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:03.634585 kubelet[2505]: E1101 01:30:03.634532 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:30:03.826507 systemd-networkd[1315]: cali0e36c40109f: Gained IPv6LL Nov 1 01:30:03.918000 audit[5373]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=5373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:30:03.918000 audit[5373]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd1cd04730 a2=0 a3=7ffd1cd0471c items=0 ppid=2709 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:03.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:30:03.934000 audit[5373]: NETFILTER_CFG table=nat:116 family=2 entries=14 op=nft_register_rule pid=5373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:30:03.934000 audit[5373]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd1cd04730 a2=0 a3=0 items=0 ppid=2709 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:03.934000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:30:03.954626 systemd-networkd[1315]: cali7a7f4d59257: Gained IPv6LL Nov 1 01:30:04.154916 env[1561]: time="2025-11-01T01:30:04.154828088Z" level=info msg="StopPodSandbox for \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\"" Nov 1 01:30:04.256791 env[1561]: 2025-11-01 01:30:04.228 [INFO][5385] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:30:04.256791 env[1561]: 2025-11-01 01:30:04.228 [INFO][5385] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" iface="eth0" netns="/var/run/netns/cni-eb33e09f-1131-d1a8-e60c-1ffb963328d2" Nov 1 01:30:04.256791 env[1561]: 2025-11-01 01:30:04.228 [INFO][5385] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" iface="eth0" netns="/var/run/netns/cni-eb33e09f-1131-d1a8-e60c-1ffb963328d2" Nov 1 01:30:04.256791 env[1561]: 2025-11-01 01:30:04.228 [INFO][5385] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" iface="eth0" netns="/var/run/netns/cni-eb33e09f-1131-d1a8-e60c-1ffb963328d2" Nov 1 01:30:04.256791 env[1561]: 2025-11-01 01:30:04.228 [INFO][5385] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:30:04.256791 env[1561]: 2025-11-01 01:30:04.228 [INFO][5385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:30:04.256791 env[1561]: 2025-11-01 01:30:04.246 [INFO][5403] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" HandleID="k8s-pod-network.72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:04.256791 env[1561]: 2025-11-01 01:30:04.246 [INFO][5403] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:04.256791 env[1561]: 2025-11-01 01:30:04.246 [INFO][5403] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:04.256791 env[1561]: 2025-11-01 01:30:04.252 [WARNING][5403] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" HandleID="k8s-pod-network.72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:04.256791 env[1561]: 2025-11-01 01:30:04.253 [INFO][5403] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" HandleID="k8s-pod-network.72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:04.256791 env[1561]: 2025-11-01 01:30:04.254 [INFO][5403] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:04.256791 env[1561]: 2025-11-01 01:30:04.255 [INFO][5385] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:30:04.257274 env[1561]: time="2025-11-01T01:30:04.256865871Z" level=info msg="TearDown network for sandbox \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\" successfully" Nov 1 01:30:04.257274 env[1561]: time="2025-11-01T01:30:04.256894310Z" level=info msg="StopPodSandbox for \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\" returns successfully" Nov 1 01:30:04.258159 env[1561]: time="2025-11-01T01:30:04.258135294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qqpf5,Uid:4c87fc13-f2aa-4700-9b99-82cf119d8f7d,Namespace:kube-system,Attempt:1,}" Nov 1 01:30:04.259472 systemd[1]: run-netns-cni\x2deb33e09f\x2d1131\x2dd1a8\x2de60c\x2d1ffb963328d2.mount: Deactivated successfully. Nov 1 01:30:04.314127 kubelet[2505]: E1101 01:30:04.314102 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:30:04.314418 kubelet[2505]: E1101 01:30:04.314152 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:30:04.334018 systemd-networkd[1315]: calib63b4c09861: Link UP Nov 1 01:30:04.336364 kubelet[2505]: I1101 01:30:04.336325 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-54j2g" podStartSLOduration=40.336307923 podStartE2EDuration="40.336307923s" podCreationTimestamp="2025-11-01 01:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:30:04.335852234 +0000 UTC m=+44.241664429" watchObservedRunningTime="2025-11-01 01:30:04.336307923 +0000 UTC m=+44.242120111" Nov 1 01:30:04.383590 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:30:04.383680 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib63b4c09861: link becomes ready Nov 1 01:30:04.383798 systemd-networkd[1315]: calib63b4c09861: Gained carrier Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.286 [INFO][5422] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0 coredns-66bc5c9577- kube-system 4c87fc13-f2aa-4700-9b99-82cf119d8f7d 996 0 2025-11-01 01:29:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 
ci-3510.3.8-n-34cd8b9336 coredns-66bc5c9577-qqpf5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib63b4c09861 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" Namespace="kube-system" Pod="coredns-66bc5c9577-qqpf5" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-" Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.286 [INFO][5422] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" Namespace="kube-system" Pod="coredns-66bc5c9577-qqpf5" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.303 [INFO][5449] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" HandleID="k8s-pod-network.941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.303 [INFO][5449] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" HandleID="k8s-pod-network.941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000503d40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-34cd8b9336", "pod":"coredns-66bc5c9577-qqpf5", "timestamp":"2025-11-01 01:30:04.303348978 +0000 UTC"}, Hostname:"ci-3510.3.8-n-34cd8b9336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.303 [INFO][5449] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.303 [INFO][5449] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.303 [INFO][5449] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-34cd8b9336' Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.309 [INFO][5449] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.313 [INFO][5449] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.316 [INFO][5449] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.318 [INFO][5449] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.321 [INFO][5449] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.321 [INFO][5449] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.323 [INFO][5449] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275 Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.326 [INFO][5449] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.331 [INFO][5449] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.8/26] block=192.168.114.0/26 handle="k8s-pod-network.941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.331 [INFO][5449] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.8/26] handle="k8s-pod-network.941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" host="ci-3510.3.8-n-34cd8b9336" Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.331 [INFO][5449] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
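[Editor's note, not part of the captured log: the IPAM records above show Calico claiming 192.168.114.8/26 for coredns-66bc5c9577-qqpf5 out of the host's affine block 192.168.114.0/26 on ci-3510.3.8-n-34cd8b9336. A minimal sketch using Python's standard ipaddress module, with the values copied from those records, to confirm the claimed address lies inside that block:

    # Illustrative check only; block and address are taken from the IPAM log lines above.
    import ipaddress

    block = ipaddress.ip_network("192.168.114.0/26")   # affine block loaded for the host
    pod_ip = ipaddress.ip_address("192.168.114.8")     # address claimed for coredns-66bc5c9577-qqpf5

    print(pod_ip in block)       # True: the claimed IP falls within the host's /26 block
    print(block.num_addresses)   # 64: addresses available per /26 block
]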
Nov 1 01:30:04.390803 env[1561]: 2025-11-01 01:30:04.331 [INFO][5449] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.8/26] IPv6=[] ContainerID="941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" HandleID="k8s-pod-network.941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:04.391306 env[1561]: 2025-11-01 01:30:04.332 [INFO][5422] cni-plugin/k8s.go 418: Populated endpoint ContainerID="941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" Namespace="kube-system" Pod="coredns-66bc5c9577-qqpf5" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4c87fc13-f2aa-4700-9b99-82cf119d8f7d", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"", Pod:"coredns-66bc5c9577-qqpf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib63b4c09861", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:04.391306 env[1561]: 2025-11-01 01:30:04.333 [INFO][5422] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.8/32] ContainerID="941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" Namespace="kube-system" Pod="coredns-66bc5c9577-qqpf5" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:04.391306 env[1561]: 2025-11-01 01:30:04.333 [INFO][5422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib63b4c09861 ContainerID="941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" Namespace="kube-system" Pod="coredns-66bc5c9577-qqpf5" 
WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:04.391306 env[1561]: 2025-11-01 01:30:04.383 [INFO][5422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" Namespace="kube-system" Pod="coredns-66bc5c9577-qqpf5" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:04.391306 env[1561]: 2025-11-01 01:30:04.384 [INFO][5422] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" Namespace="kube-system" Pod="coredns-66bc5c9577-qqpf5" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4c87fc13-f2aa-4700-9b99-82cf119d8f7d", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275", Pod:"coredns-66bc5c9577-qqpf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib63b4c09861", MAC:"3e:c7:24:d8:96:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:04.391479 env[1561]: 2025-11-01 01:30:04.389 [INFO][5422] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275" Namespace="kube-system" Pod="coredns-66bc5c9577-qqpf5" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:04.395456 env[1561]: time="2025-11-01T01:30:04.395416679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:30:04.395456 env[1561]: time="2025-11-01T01:30:04.395441524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:30:04.395456 env[1561]: time="2025-11-01T01:30:04.395448812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:30:04.395588 env[1561]: time="2025-11-01T01:30:04.395512051Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275 pid=5484 runtime=io.containerd.runc.v2 Nov 1 01:30:04.397000 audit[5501]: NETFILTER_CFG table=filter:117 family=2 entries=58 op=nft_register_chain pid=5501 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:30:04.397000 audit[5501]: SYSCALL arch=c000003e syscall=46 success=yes exit=26744 a0=3 a1=7ffcdf15c080 a2=0 a3=7ffcdf15c06c items=0 ppid=4216 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:04.397000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:30:04.401198 systemd[1]: Started cri-containerd-941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275.scope. Nov 1 01:30:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit: BPF prog-id=196 op=LOAD Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00023dc48 a2=10 a3=1c items=0 ppid=5484 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:04.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934316237316261373439333534323731333830353939636461333339 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00023d6b0 a2=3c a3=c items=0 ppid=5484 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:04.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934316237316261373439333534323731333830353939636461333339 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit: BPF prog-id=197 op=LOAD Nov 1 01:30:04.406000 audit[5494]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00023d9d8 a2=78 a3=c0000f9d90 items=0 ppid=5484 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:04.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934316237316261373439333534323731333830353939636461333339 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit: BPF prog-id=198 op=LOAD Nov 1 01:30:04.406000 audit[5494]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00023d770 a2=78 a3=c0000f9dd8 items=0 ppid=5484 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:04.406000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934316237316261373439333534323731333830353939636461333339 Nov 1 01:30:04.406000 audit: BPF prog-id=198 op=UNLOAD Nov 1 01:30:04.406000 audit: BPF prog-id=197 op=UNLOAD Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { perfmon } for pid=5494 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit[5494]: AVC avc: denied { bpf } for pid=5494 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.406000 audit: BPF prog-id=199 op=LOAD Nov 1 01:30:04.406000 audit[5494]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00023dc30 a2=78 a3=c0003ee1e8 items=0 ppid=5484 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:04.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934316237316261373439333534323731333830353939636461333339 Nov 1 01:30:04.424550 env[1561]: time="2025-11-01T01:30:04.424510138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qqpf5,Uid:4c87fc13-f2aa-4700-9b99-82cf119d8f7d,Namespace:kube-system,Attempt:1,} returns sandbox id \"941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275\"" Nov 1 01:30:04.427122 env[1561]: 
time="2025-11-01T01:30:04.427100743Z" level=info msg="CreateContainer within sandbox \"941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:30:04.431315 env[1561]: time="2025-11-01T01:30:04.431271407Z" level=info msg="CreateContainer within sandbox \"941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2c3ab97ad706a6e1aa268b17e1fe07d1eea40a0441ce3ac7f7fc33c72851789\"" Nov 1 01:30:04.431513 env[1561]: time="2025-11-01T01:30:04.431483190Z" level=info msg="StartContainer for \"b2c3ab97ad706a6e1aa268b17e1fe07d1eea40a0441ce3ac7f7fc33c72851789\"" Nov 1 01:30:04.438621 systemd[1]: Started cri-containerd-b2c3ab97ad706a6e1aa268b17e1fe07d1eea40a0441ce3ac7f7fc33c72851789.scope. Nov 1 01:30:04.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit: BPF prog-id=200 op=LOAD Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=5484 pid=5525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:04.444000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232633361623937616437303661366531616132363862313765316665 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=5484 pid=5525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:04.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232633361623937616437303661366531616132363862313765316665 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit: BPF prog-id=201 op=LOAD Nov 1 01:30:04.444000 audit[5525]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c00024de10 items=0 ppid=5484 pid=5525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:04.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232633361623937616437303661366531616132363862313765316665 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit: BPF prog-id=202 op=LOAD Nov 1 01:30:04.444000 audit[5525]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c00024de58 items=0 ppid=5484 pid=5525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:04.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232633361623937616437303661366531616132363862313765316665 Nov 1 01:30:04.444000 audit: BPF prog-id=202 op=UNLOAD Nov 1 01:30:04.444000 audit: BPF prog-id=201 op=UNLOAD Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: 
AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { perfmon } for pid=5525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit[5525]: AVC avc: denied { bpf } for pid=5525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:30:04.444000 audit: BPF prog-id=203 op=LOAD Nov 1 01:30:04.444000 audit[5525]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000398268 items=0 ppid=5484 pid=5525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:04.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232633361623937616437303661366531616132363862313765316665 Nov 1 01:30:04.452346 env[1561]: time="2025-11-01T01:30:04.452315088Z" level=info msg="StartContainer for \"b2c3ab97ad706a6e1aa268b17e1fe07d1eea40a0441ce3ac7f7fc33c72851789\" returns successfully" Nov 1 01:30:04.456000 audit[5535]: AVC avc: denied { getattr } for pid=5535 comm="coredns" path="cgroup:[4026532924]" dev="nsfs" ino=4026532924 scontext=system_u:system_r:svirt_lxc_net_t:s0:c66,c480 tcontext=system_u:object_r:nsfs_t:s0 tclass=file permissive=0 Nov 1 01:30:04.456000 audit[5535]: SYSCALL arch=c000003e syscall=262 success=no exit=-13 a0=ffffffffffffff9c a1=c00017eb70 a2=c00024a9f8 a3=0 items=0 ppid=5484 pid=5535 auid=4294967295 uid=65532 gid=65532 euid=65532 suid=65532 fsuid=65532 egid=65532 sgid=65532 fsgid=65532 tty=(none) ses=4294967295 comm="coredns" exe="/coredns" subj=system_u:system_r:svirt_lxc_net_t:s0:c66,c480 key=(null) Nov 1 01:30:04.456000 audit: PROCTITLE proctitle=2F636F7265646E73002D636F6E66002F6574632F636F7265646E732F436F726566696C65 Nov 1 01:30:04.960000 audit[5565]: NETFILTER_CFG table=filter:118 family=2 entries=20 op=nft_register_rule pid=5565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:30:04.960000 audit[5565]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe3b5bb560 a2=0 a3=7ffe3b5bb54c items=0 ppid=2709 pid=5565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:04.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:30:04.972000 audit[5565]: NETFILTER_CFG table=nat:119 family=2 entries=14 op=nft_register_rule pid=5565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:30:04.972000 audit[5565]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe3b5bb560 a2=0 a3=0 items=0 ppid=2709 pid=5565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:04.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:30:05.106732 systemd-networkd[1315]: calieb77a8bad17: Gained IPv6LL Nov 1 01:30:05.345643 kubelet[2505]: I1101 01:30:05.345359 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qqpf5" podStartSLOduration=41.345322603 podStartE2EDuration="41.345322603s" podCreationTimestamp="2025-11-01 01:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:30:05.34506596 +0000 UTC m=+45.250878245" watchObservedRunningTime="2025-11-01 01:30:05.345322603 +0000 UTC m=+45.251134870" Nov 1 01:30:05.999000 audit[5567]: NETFILTER_CFG table=filter:120 family=2 entries=17 op=nft_register_rule pid=5567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:30:05.999000 audit[5567]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe83bc3ea0 a2=0 a3=7ffe83bc3e8c items=0 ppid=2709 pid=5567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:05.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:30:06.011000 audit[5567]: NETFILTER_CFG table=nat:121 family=2 entries=47 op=nft_register_chain pid=5567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:30:06.011000 audit[5567]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe83bc3ea0 a2=0 a3=7ffe83bc3e8c items=0 ppid=2709 pid=5567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:30:06.011000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:30:06.194563 systemd-networkd[1315]: calib63b4c09861: Gained IPv6LL Nov 1 01:30:14.154181 env[1561]: time="2025-11-01T01:30:14.154097180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:30:14.524609 env[1561]: time="2025-11-01T01:30:14.524455360Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:14.525615 env[1561]: time="2025-11-01T01:30:14.525392358Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:30:14.526112 kubelet[2505]: E1101 01:30:14.525999 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:30:14.526112 kubelet[2505]: E1101 01:30:14.526105 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:30:14.527324 kubelet[2505]: E1101 01:30:14.526281 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-649c6d6f48-6pq8q_calico-system(2743a542-7119-47a8-937d-fec5c85bdcf2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:14.528414 env[1561]: time="2025-11-01T01:30:14.528315031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:30:14.900659 env[1561]: time="2025-11-01T01:30:14.900371401Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:14.901243 env[1561]: time="2025-11-01T01:30:14.901136687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:30:14.901753 kubelet[2505]: E1101 01:30:14.901626 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:30:14.901753 kubelet[2505]: E1101 01:30:14.901720 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:30:14.902121 kubelet[2505]: E1101 01:30:14.901886 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-649c6d6f48-6pq8q_calico-system(2743a542-7119-47a8-937d-fec5c85bdcf2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:14.902121 kubelet[2505]: E1101 01:30:14.901989 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:30:15.156323 env[1561]: time="2025-11-01T01:30:15.156106658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:30:15.514812 env[1561]: time="2025-11-01T01:30:15.514666692Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:15.515622 env[1561]: time="2025-11-01T01:30:15.515482436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:30:15.516055 kubelet[2505]: E1101 01:30:15.515949 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:30:15.516292 kubelet[2505]: E1101 01:30:15.516073 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:30:15.516538 kubelet[2505]: E1101 01:30:15.516446 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5bff9f9fd4-4vmq2_calico-system(593908a5-f718-4b03-b095-540ff204a4bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:15.516855 kubelet[2505]: E1101 01:30:15.516576 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:30:15.517167 env[1561]: time="2025-11-01T01:30:15.516810588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:30:15.867000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:15.884895 env[1561]: time="2025-11-01T01:30:15.884820985Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:15.885295 env[1561]: time="2025-11-01T01:30:15.885262390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:30:15.885468 kubelet[2505]: E1101 01:30:15.885418 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:30:15.885468 kubelet[2505]: E1101 01:30:15.885447 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:30:15.885716 kubelet[2505]: E1101 01:30:15.885575 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:15.885761 env[1561]: time="2025-11-01T01:30:15.885647970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:30:15.891456 kernel: kauditd_printk_skb: 402 callbacks suppressed Nov 1 01:30:15.891497 kernel: audit: type=1400 audit(1761960615.867:1315): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:15.867000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0031196e0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:30:16.075669 kernel: audit: type=1300 audit(1761960615.867:1315): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0031196e0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:30:16.075749 kernel: audit: type=1327 audit(1761960615.867:1315): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:30:15.867000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:30:15.867000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:16.236564 kernel: audit: type=1400 audit(1761960615.867:1316): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:16.236620 kernel: audit: type=1300 audit(1761960615.867:1316): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0017e2570 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:30:15.867000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0017e2570 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:30:15.867000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:30:16.342385 env[1561]: time="2025-11-01T01:30:16.342363836Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:16.342768 env[1561]: time="2025-11-01T01:30:16.342723454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:30:16.342880 kubelet[2505]: E1101 01:30:16.342827 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:30:16.342880 kubelet[2505]: E1101 01:30:16.342855 2505 kuberuntime_image.go:43] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:30:16.342958 kubelet[2505]: E1101 01:30:16.342939 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-6xhxh_calico-system(9aac0066-097a-4582-8ce8-a3a1ddb41b3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:16.342991 kubelet[2505]: E1101 01:30:16.342965 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:30:16.343078 env[1561]: time="2025-11-01T01:30:16.343041372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:30:16.427562 kernel: audit: type=1327 audit(1761960615.867:1316): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:30:16.427622 kernel: audit: type=1400 audit(1761960615.898:1317): avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sda9" ino=520991 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:15.898000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sda9" ino=520991 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:15.898000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:16.601651 kernel: audit: type=1400 audit(1761960615.898:1318): avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:16.601716 kernel: audit: type=1300 audit(1761960615.898:1317): arch=c000003e syscall=254 success=no exit=-13 a0=73 a1=c01492cc60 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:30:15.898000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=73 a1=c01492cc60 a2=fc6 a3=0 
items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:30:16.696733 kernel: audit: type=1327 audit(1761960615.898:1317): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:30:15.898000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:30:16.738814 env[1561]: time="2025-11-01T01:30:16.738758411Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:16.739297 env[1561]: time="2025-11-01T01:30:16.739249509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:30:16.739378 kubelet[2505]: E1101 01:30:16.739359 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:30:16.739413 kubelet[2505]: E1101 01:30:16.739385 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:30:16.739551 kubelet[2505]: E1101 01:30:16.739505 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b6cfc8885-lqwd9_calico-apiserver(cb08aa02-32db-4371-b5cc-c9a5a7fd22c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:16.739551 kubelet[2505]: E1101 01:30:16.739538 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:30:16.739636 env[1561]: time="2025-11-01T01:30:16.739593599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:30:15.898000 audit[2293]: SYSCALL arch=c000003e 
syscall=254 success=no exit=-13 a0=74 a1=c014427da0 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:30:15.898000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:30:15.898000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sda9" ino=520985 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:15.898000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=73 a1=c0085a1260 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:30:15.898000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:30:15.898000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:15.898000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=74 a1=c0149b6fe0 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:30:15.898000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:30:15.899000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:15.899000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=73 a1=c01445d0c0 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:30:15.899000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:30:15.899000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" 
path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:15.899000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=74 a1=c01492cd20 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:30:15.899000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:30:17.122714 env[1561]: time="2025-11-01T01:30:17.122578337Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:17.124402 env[1561]: time="2025-11-01T01:30:17.124329729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:30:17.124823 kubelet[2505]: E1101 01:30:17.124774 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:30:17.125338 kubelet[2505]: E1101 01:30:17.125303 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:30:17.125568 kubelet[2505]: E1101 01:30:17.125539 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:17.126277 kubelet[2505]: E1101 01:30:17.126054 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:30:18.156072 env[1561]: time="2025-11-01T01:30:18.155984870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:30:18.556601 env[1561]: time="2025-11-01T01:30:18.556502686Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:18.558096 env[1561]: time="2025-11-01T01:30:18.557993114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:30:18.558734 kubelet[2505]: E1101 01:30:18.558634 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:30:18.559540 kubelet[2505]: E1101 01:30:18.558752 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:30:18.559540 kubelet[2505]: E1101 01:30:18.559013 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b6cfc8885-rhjss_calico-apiserver(438a7b01-7b7b-439d-a5c9-a6d4d681a41f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:18.559540 kubelet[2505]: E1101 01:30:18.559150 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:30:18.692000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:18.692000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:18.692000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0024c9400 a2=fc6 a3=0 
items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:30:18.692000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c003350740 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:30:18.692000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:30:18.692000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:30:18.692000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:18.692000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c003428380 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:30:18.692000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:30:18.693000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:30:18.693000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c003119900 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:30:18.693000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:30:20.150476 env[1561]: time="2025-11-01T01:30:20.150376934Z" level=info msg="StopPodSandbox for \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\"" Nov 1 01:30:20.229101 env[1561]: 2025-11-01 01:30:20.208 [WARNING][5599] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0", GenerateName:"calico-kube-controllers-5bff9f9fd4-", Namespace:"calico-system", SelfLink:"", UID:"593908a5-f718-4b03-b095-540ff204a4bd", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bff9f9fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88", Pod:"calico-kube-controllers-5bff9f9fd4-4vmq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali141d6dd5341", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.229101 env[1561]: 2025-11-01 01:30:20.208 [INFO][5599] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:30:20.229101 env[1561]: 2025-11-01 01:30:20.208 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" iface="eth0" netns="" Nov 1 01:30:20.229101 env[1561]: 2025-11-01 01:30:20.208 [INFO][5599] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:30:20.229101 env[1561]: 2025-11-01 01:30:20.208 [INFO][5599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:30:20.229101 env[1561]: 2025-11-01 01:30:20.221 [INFO][5617] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" HandleID="k8s-pod-network.c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:20.229101 env[1561]: 2025-11-01 01:30:20.221 [INFO][5617] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.229101 env[1561]: 2025-11-01 01:30:20.221 [INFO][5617] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.229101 env[1561]: 2025-11-01 01:30:20.226 [WARNING][5617] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" HandleID="k8s-pod-network.c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:20.229101 env[1561]: 2025-11-01 01:30:20.226 [INFO][5617] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" HandleID="k8s-pod-network.c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:20.229101 env[1561]: 2025-11-01 01:30:20.227 [INFO][5617] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.229101 env[1561]: 2025-11-01 01:30:20.228 [INFO][5599] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:30:20.229619 env[1561]: time="2025-11-01T01:30:20.229125058Z" level=info msg="TearDown network for sandbox \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\" successfully" Nov 1 01:30:20.229619 env[1561]: time="2025-11-01T01:30:20.229149764Z" level=info msg="StopPodSandbox for \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\" returns successfully" Nov 1 01:30:20.229619 env[1561]: time="2025-11-01T01:30:20.229564575Z" level=info msg="RemovePodSandbox for \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\"" Nov 1 01:30:20.229737 env[1561]: time="2025-11-01T01:30:20.229596198Z" level=info msg="Forcibly stopping sandbox \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\"" Nov 1 01:30:20.270029 env[1561]: 2025-11-01 01:30:20.250 [WARNING][5638] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0", GenerateName:"calico-kube-controllers-5bff9f9fd4-", Namespace:"calico-system", SelfLink:"", UID:"593908a5-f718-4b03-b095-540ff204a4bd", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bff9f9fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"f1082a4dde71afea80a2f88419bbf26033601c06bb6b4ba4798b5ff16fa12d88", Pod:"calico-kube-controllers-5bff9f9fd4-4vmq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali141d6dd5341", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.270029 env[1561]: 2025-11-01 01:30:20.250 [INFO][5638] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:30:20.270029 env[1561]: 2025-11-01 01:30:20.250 [INFO][5638] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" iface="eth0" netns="" Nov 1 01:30:20.270029 env[1561]: 2025-11-01 01:30:20.250 [INFO][5638] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:30:20.270029 env[1561]: 2025-11-01 01:30:20.250 [INFO][5638] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:30:20.270029 env[1561]: 2025-11-01 01:30:20.262 [INFO][5653] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" HandleID="k8s-pod-network.c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:20.270029 env[1561]: 2025-11-01 01:30:20.262 [INFO][5653] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.270029 env[1561]: 2025-11-01 01:30:20.262 [INFO][5653] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.270029 env[1561]: 2025-11-01 01:30:20.267 [WARNING][5653] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" HandleID="k8s-pod-network.c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:20.270029 env[1561]: 2025-11-01 01:30:20.267 [INFO][5653] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" HandleID="k8s-pod-network.c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--kube--controllers--5bff9f9fd4--4vmq2-eth0" Nov 1 01:30:20.270029 env[1561]: 2025-11-01 01:30:20.268 [INFO][5653] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.270029 env[1561]: 2025-11-01 01:30:20.269 [INFO][5638] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a" Nov 1 01:30:20.270445 env[1561]: time="2025-11-01T01:30:20.270056845Z" level=info msg="TearDown network for sandbox \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\" successfully" Nov 1 01:30:20.271945 env[1561]: time="2025-11-01T01:30:20.271905015Z" level=info msg="RemovePodSandbox \"c1f5d14566c1f92413d133232bab9723698b6dba6fc8e71efdaf27ed14586e3a\" returns successfully" Nov 1 01:30:20.272279 env[1561]: time="2025-11-01T01:30:20.272260431Z" level=info msg="StopPodSandbox for \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\"" Nov 1 01:30:20.314161 env[1561]: 2025-11-01 01:30:20.292 [WARNING][5678] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0", GenerateName:"calico-apiserver-7b6cfc8885-", Namespace:"calico-apiserver", SelfLink:"", UID:"cb08aa02-32db-4371-b5cc-c9a5a7fd22c8", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6cfc8885", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986", Pod:"calico-apiserver-7b6cfc8885-lqwd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e36c40109f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.314161 env[1561]: 2025-11-01 01:30:20.293 [INFO][5678] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:30:20.314161 env[1561]: 2025-11-01 01:30:20.293 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" iface="eth0" netns="" Nov 1 01:30:20.314161 env[1561]: 2025-11-01 01:30:20.293 [INFO][5678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:30:20.314161 env[1561]: 2025-11-01 01:30:20.293 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:30:20.314161 env[1561]: 2025-11-01 01:30:20.306 [INFO][5695] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" HandleID="k8s-pod-network.4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:20.314161 env[1561]: 2025-11-01 01:30:20.306 [INFO][5695] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.314161 env[1561]: 2025-11-01 01:30:20.306 [INFO][5695] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.314161 env[1561]: 2025-11-01 01:30:20.310 [WARNING][5695] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" HandleID="k8s-pod-network.4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:20.314161 env[1561]: 2025-11-01 01:30:20.311 [INFO][5695] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" HandleID="k8s-pod-network.4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:20.314161 env[1561]: 2025-11-01 01:30:20.312 [INFO][5695] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.314161 env[1561]: 2025-11-01 01:30:20.313 [INFO][5678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:30:20.314577 env[1561]: time="2025-11-01T01:30:20.314182144Z" level=info msg="TearDown network for sandbox \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\" successfully" Nov 1 01:30:20.314577 env[1561]: time="2025-11-01T01:30:20.314207108Z" level=info msg="StopPodSandbox for \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\" returns successfully" Nov 1 01:30:20.314577 env[1561]: time="2025-11-01T01:30:20.314523659Z" level=info msg="RemovePodSandbox for \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\"" Nov 1 01:30:20.314649 env[1561]: time="2025-11-01T01:30:20.314554939Z" level=info msg="Forcibly stopping sandbox \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\"" Nov 1 01:30:20.358495 env[1561]: 2025-11-01 01:30:20.341 [WARNING][5723] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0", GenerateName:"calico-apiserver-7b6cfc8885-", Namespace:"calico-apiserver", SelfLink:"", UID:"cb08aa02-32db-4371-b5cc-c9a5a7fd22c8", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6cfc8885", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"c328ddc571e0f38942770b2caad9a329e0a9278e27e86b612d1dea63cb650986", Pod:"calico-apiserver-7b6cfc8885-lqwd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e36c40109f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.358495 env[1561]: 2025-11-01 01:30:20.341 [INFO][5723] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:30:20.358495 env[1561]: 2025-11-01 01:30:20.341 [INFO][5723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" iface="eth0" netns="" Nov 1 01:30:20.358495 env[1561]: 2025-11-01 01:30:20.341 [INFO][5723] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:30:20.358495 env[1561]: 2025-11-01 01:30:20.341 [INFO][5723] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:30:20.358495 env[1561]: 2025-11-01 01:30:20.351 [INFO][5737] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" HandleID="k8s-pod-network.4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:20.358495 env[1561]: 2025-11-01 01:30:20.351 [INFO][5737] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.358495 env[1561]: 2025-11-01 01:30:20.351 [INFO][5737] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.358495 env[1561]: 2025-11-01 01:30:20.355 [WARNING][5737] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" HandleID="k8s-pod-network.4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:20.358495 env[1561]: 2025-11-01 01:30:20.355 [INFO][5737] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" HandleID="k8s-pod-network.4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--lqwd9-eth0" Nov 1 01:30:20.358495 env[1561]: 2025-11-01 01:30:20.356 [INFO][5737] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.358495 env[1561]: 2025-11-01 01:30:20.357 [INFO][5723] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9" Nov 1 01:30:20.358495 env[1561]: time="2025-11-01T01:30:20.358486957Z" level=info msg="TearDown network for sandbox \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\" successfully" Nov 1 01:30:20.378124 env[1561]: time="2025-11-01T01:30:20.378012752Z" level=info msg="RemovePodSandbox \"4be5ed106eec3720ce01b0c98adae95c31cf71328a906d5a6af4332881d611a9\" returns successfully" Nov 1 01:30:20.379042 env[1561]: time="2025-11-01T01:30:20.378908011Z" level=info msg="StopPodSandbox for \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\"" Nov 1 01:30:20.455296 env[1561]: 2025-11-01 01:30:20.432 [WARNING][5761] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4c87fc13-f2aa-4700-9b99-82cf119d8f7d", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275", Pod:"coredns-66bc5c9577-qqpf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib63b4c09861", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.455296 env[1561]: 2025-11-01 01:30:20.432 [INFO][5761] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:30:20.455296 env[1561]: 2025-11-01 01:30:20.432 [INFO][5761] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" iface="eth0" netns="" Nov 1 01:30:20.455296 env[1561]: 2025-11-01 01:30:20.432 [INFO][5761] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:30:20.455296 env[1561]: 2025-11-01 01:30:20.432 [INFO][5761] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:30:20.455296 env[1561]: 2025-11-01 01:30:20.446 [INFO][5778] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" HandleID="k8s-pod-network.72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:20.455296 env[1561]: 2025-11-01 01:30:20.446 [INFO][5778] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.455296 env[1561]: 2025-11-01 01:30:20.446 [INFO][5778] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.455296 env[1561]: 2025-11-01 01:30:20.451 [WARNING][5778] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" HandleID="k8s-pod-network.72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:20.455296 env[1561]: 2025-11-01 01:30:20.451 [INFO][5778] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" HandleID="k8s-pod-network.72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:20.455296 env[1561]: 2025-11-01 01:30:20.453 [INFO][5778] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.455296 env[1561]: 2025-11-01 01:30:20.454 [INFO][5761] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:30:20.455769 env[1561]: time="2025-11-01T01:30:20.455301524Z" level=info msg="TearDown network for sandbox \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\" successfully" Nov 1 01:30:20.455769 env[1561]: time="2025-11-01T01:30:20.455333232Z" level=info msg="StopPodSandbox for \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\" returns successfully" Nov 1 01:30:20.455769 env[1561]: time="2025-11-01T01:30:20.455697109Z" level=info msg="RemovePodSandbox for \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\"" Nov 1 01:30:20.455769 env[1561]: time="2025-11-01T01:30:20.455727745Z" level=info msg="Forcibly stopping sandbox \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\"" Nov 1 01:30:20.493800 env[1561]: 2025-11-01 01:30:20.477 [WARNING][5804] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4c87fc13-f2aa-4700-9b99-82cf119d8f7d", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"941b71ba749354271380599cda33950df6594414a4a18688e5f024ad833fe275", Pod:"coredns-66bc5c9577-qqpf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib63b4c09861", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.493800 env[1561]: 2025-11-01 01:30:20.477 [INFO][5804] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:30:20.493800 env[1561]: 2025-11-01 01:30:20.477 [INFO][5804] cni-plugin/dataplane_linux.go 
555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" iface="eth0" netns="" Nov 1 01:30:20.493800 env[1561]: 2025-11-01 01:30:20.477 [INFO][5804] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:30:20.493800 env[1561]: 2025-11-01 01:30:20.477 [INFO][5804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:30:20.493800 env[1561]: 2025-11-01 01:30:20.486 [INFO][5819] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" HandleID="k8s-pod-network.72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:20.493800 env[1561]: 2025-11-01 01:30:20.486 [INFO][5819] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.493800 env[1561]: 2025-11-01 01:30:20.486 [INFO][5819] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.493800 env[1561]: 2025-11-01 01:30:20.491 [WARNING][5819] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" HandleID="k8s-pod-network.72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:20.493800 env[1561]: 2025-11-01 01:30:20.491 [INFO][5819] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" HandleID="k8s-pod-network.72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--qqpf5-eth0" Nov 1 01:30:20.493800 env[1561]: 2025-11-01 01:30:20.492 [INFO][5819] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.493800 env[1561]: 2025-11-01 01:30:20.493 [INFO][5804] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac" Nov 1 01:30:20.494140 env[1561]: time="2025-11-01T01:30:20.493793613Z" level=info msg="TearDown network for sandbox \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\" successfully" Nov 1 01:30:20.495135 env[1561]: time="2025-11-01T01:30:20.495099522Z" level=info msg="RemovePodSandbox \"72a38db821cf57c8e3e671ec2d28f8bd902692615d72e5c12ed5ef0a5e0573ac\" returns successfully" Nov 1 01:30:20.495419 env[1561]: time="2025-11-01T01:30:20.495378882Z" level=info msg="StopPodSandbox for \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\"" Nov 1 01:30:20.531953 env[1561]: 2025-11-01 01:30:20.514 [WARNING][5844] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c8ca38b0-d8b7-4714-8abe-3e911e8eec29", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725", Pod:"coredns-66bc5c9577-54j2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb77a8bad17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.531953 env[1561]: 2025-11-01 01:30:20.514 [INFO][5844] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:30:20.531953 env[1561]: 2025-11-01 01:30:20.514 [INFO][5844] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" iface="eth0" netns="" Nov 1 01:30:20.531953 env[1561]: 2025-11-01 01:30:20.514 [INFO][5844] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:30:20.531953 env[1561]: 2025-11-01 01:30:20.514 [INFO][5844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:30:20.531953 env[1561]: 2025-11-01 01:30:20.523 [INFO][5862] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" HandleID="k8s-pod-network.43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:20.531953 env[1561]: 2025-11-01 01:30:20.523 [INFO][5862] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.531953 env[1561]: 2025-11-01 01:30:20.524 [INFO][5862] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.531953 env[1561]: 2025-11-01 01:30:20.528 [WARNING][5862] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" HandleID="k8s-pod-network.43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:20.531953 env[1561]: 2025-11-01 01:30:20.528 [INFO][5862] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" HandleID="k8s-pod-network.43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:20.531953 env[1561]: 2025-11-01 01:30:20.529 [INFO][5862] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.531953 env[1561]: 2025-11-01 01:30:20.530 [INFO][5844] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:30:20.532370 env[1561]: time="2025-11-01T01:30:20.531976055Z" level=info msg="TearDown network for sandbox \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\" successfully" Nov 1 01:30:20.532370 env[1561]: time="2025-11-01T01:30:20.531995053Z" level=info msg="StopPodSandbox for \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\" returns successfully" Nov 1 01:30:20.532370 env[1561]: time="2025-11-01T01:30:20.532288659Z" level=info msg="RemovePodSandbox for \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\"" Nov 1 01:30:20.532370 env[1561]: time="2025-11-01T01:30:20.532312573Z" level=info msg="Forcibly stopping sandbox \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\"" Nov 1 01:30:20.566284 env[1561]: 2025-11-01 01:30:20.549 [WARNING][5888] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c8ca38b0-d8b7-4714-8abe-3e911e8eec29", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"26d88e78bdb7dde573cd726943ecdf02363a83ec24b13e8889ef3e388368c725", Pod:"coredns-66bc5c9577-54j2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb77a8bad17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.566284 env[1561]: 2025-11-01 01:30:20.549 [INFO][5888] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:30:20.566284 env[1561]: 2025-11-01 01:30:20.549 [INFO][5888] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" iface="eth0" netns="" Nov 1 01:30:20.566284 env[1561]: 2025-11-01 01:30:20.549 [INFO][5888] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:30:20.566284 env[1561]: 2025-11-01 01:30:20.549 [INFO][5888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:30:20.566284 env[1561]: 2025-11-01 01:30:20.559 [INFO][5904] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" HandleID="k8s-pod-network.43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:20.566284 env[1561]: 2025-11-01 01:30:20.559 [INFO][5904] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.566284 env[1561]: 2025-11-01 01:30:20.559 [INFO][5904] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.566284 env[1561]: 2025-11-01 01:30:20.563 [WARNING][5904] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" HandleID="k8s-pod-network.43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:20.566284 env[1561]: 2025-11-01 01:30:20.563 [INFO][5904] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" HandleID="k8s-pod-network.43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Workload="ci--3510.3.8--n--34cd8b9336-k8s-coredns--66bc5c9577--54j2g-eth0" Nov 1 01:30:20.566284 env[1561]: 2025-11-01 01:30:20.564 [INFO][5904] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.566284 env[1561]: 2025-11-01 01:30:20.565 [INFO][5888] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e" Nov 1 01:30:20.566761 env[1561]: time="2025-11-01T01:30:20.566305330Z" level=info msg="TearDown network for sandbox \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\" successfully" Nov 1 01:30:20.567782 env[1561]: time="2025-11-01T01:30:20.567769187Z" level=info msg="RemovePodSandbox \"43a8cb25742252138fff0374c1108ad721a2add65efc51f4690c612b067e549e\" returns successfully" Nov 1 01:30:20.568066 env[1561]: time="2025-11-01T01:30:20.568025300Z" level=info msg="StopPodSandbox for \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\"" Nov 1 01:30:20.603161 env[1561]: 2025-11-01 01:30:20.585 [WARNING][5924] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"9aac0066-097a-4582-8ce8-a3a1ddb41b3d", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26", Pod:"goldmane-7c778bb748-6xhxh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib3833dab0fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.603161 env[1561]: 2025-11-01 01:30:20.585 [INFO][5924] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:30:20.603161 env[1561]: 2025-11-01 01:30:20.585 [INFO][5924] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" iface="eth0" netns="" Nov 1 01:30:20.603161 env[1561]: 2025-11-01 01:30:20.585 [INFO][5924] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:30:20.603161 env[1561]: 2025-11-01 01:30:20.585 [INFO][5924] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:30:20.603161 env[1561]: 2025-11-01 01:30:20.595 [INFO][5938] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" HandleID="k8s-pod-network.a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Workload="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:20.603161 env[1561]: 2025-11-01 01:30:20.595 [INFO][5938] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.603161 env[1561]: 2025-11-01 01:30:20.595 [INFO][5938] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.603161 env[1561]: 2025-11-01 01:30:20.600 [WARNING][5938] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" HandleID="k8s-pod-network.a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Workload="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:20.603161 env[1561]: 2025-11-01 01:30:20.600 [INFO][5938] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" HandleID="k8s-pod-network.a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Workload="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:20.603161 env[1561]: 2025-11-01 01:30:20.601 [INFO][5938] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.603161 env[1561]: 2025-11-01 01:30:20.602 [INFO][5924] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:30:20.603609 env[1561]: time="2025-11-01T01:30:20.603191803Z" level=info msg="TearDown network for sandbox \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\" successfully" Nov 1 01:30:20.603609 env[1561]: time="2025-11-01T01:30:20.603223906Z" level=info msg="StopPodSandbox for \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\" returns successfully" Nov 1 01:30:20.603609 env[1561]: time="2025-11-01T01:30:20.603557330Z" level=info msg="RemovePodSandbox for \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\"" Nov 1 01:30:20.603609 env[1561]: time="2025-11-01T01:30:20.603581736Z" level=info msg="Forcibly stopping sandbox \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\"" Nov 1 01:30:20.643546 env[1561]: 2025-11-01 01:30:20.624 [WARNING][5962] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"9aac0066-097a-4582-8ce8-a3a1ddb41b3d", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"3c3be7ec870339318426b24c09da679d9b235cadbdd1038b2586d3b5b91d9f26", Pod:"goldmane-7c778bb748-6xhxh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib3833dab0fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.643546 env[1561]: 2025-11-01 01:30:20.624 [INFO][5962] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:30:20.643546 env[1561]: 2025-11-01 01:30:20.624 [INFO][5962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" iface="eth0" netns="" Nov 1 01:30:20.643546 env[1561]: 2025-11-01 01:30:20.624 [INFO][5962] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:30:20.643546 env[1561]: 2025-11-01 01:30:20.625 [INFO][5962] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:30:20.643546 env[1561]: 2025-11-01 01:30:20.635 [INFO][5980] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" HandleID="k8s-pod-network.a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Workload="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:20.643546 env[1561]: 2025-11-01 01:30:20.635 [INFO][5980] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.643546 env[1561]: 2025-11-01 01:30:20.635 [INFO][5980] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.643546 env[1561]: 2025-11-01 01:30:20.640 [WARNING][5980] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" HandleID="k8s-pod-network.a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Workload="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:20.643546 env[1561]: 2025-11-01 01:30:20.640 [INFO][5980] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" HandleID="k8s-pod-network.a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Workload="ci--3510.3.8--n--34cd8b9336-k8s-goldmane--7c778bb748--6xhxh-eth0" Nov 1 01:30:20.643546 env[1561]: 2025-11-01 01:30:20.641 [INFO][5980] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.643546 env[1561]: 2025-11-01 01:30:20.642 [INFO][5962] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102" Nov 1 01:30:20.643946 env[1561]: time="2025-11-01T01:30:20.643571040Z" level=info msg="TearDown network for sandbox \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\" successfully" Nov 1 01:30:20.645208 env[1561]: time="2025-11-01T01:30:20.645190161Z" level=info msg="RemovePodSandbox \"a775011d34bf92326975016df9dfb7a01716c0dae10d725140c57c2c18d19102\" returns successfully" Nov 1 01:30:20.645530 env[1561]: time="2025-11-01T01:30:20.645481637Z" level=info msg="StopPodSandbox for \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\"" Nov 1 01:30:20.684736 env[1561]: 2025-11-01 01:30:20.666 [WARNING][6005] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0", GenerateName:"calico-apiserver-7b6cfc8885-", Namespace:"calico-apiserver", SelfLink:"", UID:"438a7b01-7b7b-439d-a5c9-a6d4d681a41f", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6cfc8885", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd", Pod:"calico-apiserver-7b6cfc8885-rhjss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a7f4d59257", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.684736 env[1561]: 2025-11-01 01:30:20.666 [INFO][6005] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:30:20.684736 env[1561]: 2025-11-01 01:30:20.666 [INFO][6005] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" iface="eth0" netns="" Nov 1 01:30:20.684736 env[1561]: 2025-11-01 01:30:20.666 [INFO][6005] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:30:20.684736 env[1561]: 2025-11-01 01:30:20.666 [INFO][6005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:30:20.684736 env[1561]: 2025-11-01 01:30:20.677 [INFO][6022] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" HandleID="k8s-pod-network.e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:20.684736 env[1561]: 2025-11-01 01:30:20.677 [INFO][6022] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.684736 env[1561]: 2025-11-01 01:30:20.678 [INFO][6022] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.684736 env[1561]: 2025-11-01 01:30:20.682 [WARNING][6022] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" HandleID="k8s-pod-network.e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:20.684736 env[1561]: 2025-11-01 01:30:20.682 [INFO][6022] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" HandleID="k8s-pod-network.e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:20.684736 env[1561]: 2025-11-01 01:30:20.683 [INFO][6022] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.684736 env[1561]: 2025-11-01 01:30:20.683 [INFO][6005] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:30:20.685131 env[1561]: time="2025-11-01T01:30:20.684751256Z" level=info msg="TearDown network for sandbox \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\" successfully" Nov 1 01:30:20.685131 env[1561]: time="2025-11-01T01:30:20.684771941Z" level=info msg="StopPodSandbox for \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\" returns successfully" Nov 1 01:30:20.685131 env[1561]: time="2025-11-01T01:30:20.685073080Z" level=info msg="RemovePodSandbox for \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\"" Nov 1 01:30:20.685131 env[1561]: time="2025-11-01T01:30:20.685094823Z" level=info msg="Forcibly stopping sandbox \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\"" Nov 1 01:30:20.718812 env[1561]: 2025-11-01 01:30:20.702 [WARNING][6047] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0", GenerateName:"calico-apiserver-7b6cfc8885-", Namespace:"calico-apiserver", SelfLink:"", UID:"438a7b01-7b7b-439d-a5c9-a6d4d681a41f", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b6cfc8885", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"d5277884564c3c093fb6fdc69e866326bf49a9a2a0e59252cd541875fd6e36bd", Pod:"calico-apiserver-7b6cfc8885-rhjss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a7f4d59257", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.718812 env[1561]: 2025-11-01 01:30:20.702 [INFO][6047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:30:20.718812 env[1561]: 2025-11-01 01:30:20.702 [INFO][6047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" iface="eth0" netns="" Nov 1 01:30:20.718812 env[1561]: 2025-11-01 01:30:20.702 [INFO][6047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:30:20.718812 env[1561]: 2025-11-01 01:30:20.702 [INFO][6047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:30:20.718812 env[1561]: 2025-11-01 01:30:20.712 [INFO][6063] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" HandleID="k8s-pod-network.e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:20.718812 env[1561]: 2025-11-01 01:30:20.712 [INFO][6063] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.718812 env[1561]: 2025-11-01 01:30:20.712 [INFO][6063] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.718812 env[1561]: 2025-11-01 01:30:20.716 [WARNING][6063] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" HandleID="k8s-pod-network.e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:20.718812 env[1561]: 2025-11-01 01:30:20.716 [INFO][6063] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" HandleID="k8s-pod-network.e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Workload="ci--3510.3.8--n--34cd8b9336-k8s-calico--apiserver--7b6cfc8885--rhjss-eth0" Nov 1 01:30:20.718812 env[1561]: 2025-11-01 01:30:20.717 [INFO][6063] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.718812 env[1561]: 2025-11-01 01:30:20.717 [INFO][6047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f" Nov 1 01:30:20.718812 env[1561]: time="2025-11-01T01:30:20.718767821Z" level=info msg="TearDown network for sandbox \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\" successfully" Nov 1 01:30:20.720299 env[1561]: time="2025-11-01T01:30:20.720258190Z" level=info msg="RemovePodSandbox \"e7d9b58f4f708a7630ed462bfab7e5136d66c80645918bc2d6d3d8cf646a9d4f\" returns successfully" Nov 1 01:30:20.720598 env[1561]: time="2025-11-01T01:30:20.720553513Z" level=info msg="StopPodSandbox for \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\"" Nov 1 01:30:20.774485 env[1561]: 2025-11-01 01:30:20.757 [WARNING][6086] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79df0ba2-6e86-422c-8f93-652dfb942b69", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf", Pod:"csi-node-driver-9wz7k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali35a2f823f05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.774485 env[1561]: 2025-11-01 01:30:20.757 [INFO][6086] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:30:20.774485 env[1561]: 2025-11-01 01:30:20.757 [INFO][6086] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" iface="eth0" netns="" Nov 1 01:30:20.774485 env[1561]: 2025-11-01 01:30:20.757 [INFO][6086] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:30:20.774485 env[1561]: 2025-11-01 01:30:20.757 [INFO][6086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:30:20.774485 env[1561]: 2025-11-01 01:30:20.766 [INFO][6097] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" HandleID="k8s-pod-network.6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Workload="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:20.774485 env[1561]: 2025-11-01 01:30:20.766 [INFO][6097] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.774485 env[1561]: 2025-11-01 01:30:20.766 [INFO][6097] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.774485 env[1561]: 2025-11-01 01:30:20.770 [WARNING][6097] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" HandleID="k8s-pod-network.6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Workload="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:20.774485 env[1561]: 2025-11-01 01:30:20.770 [INFO][6097] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" HandleID="k8s-pod-network.6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Workload="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:20.774485 env[1561]: 2025-11-01 01:30:20.772 [INFO][6097] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.774485 env[1561]: 2025-11-01 01:30:20.773 [INFO][6086] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:30:20.774885 env[1561]: time="2025-11-01T01:30:20.774508519Z" level=info msg="TearDown network for sandbox \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\" successfully" Nov 1 01:30:20.774885 env[1561]: time="2025-11-01T01:30:20.774528861Z" level=info msg="StopPodSandbox for \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\" returns successfully" Nov 1 01:30:20.774885 env[1561]: time="2025-11-01T01:30:20.774814962Z" level=info msg="RemovePodSandbox for \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\"" Nov 1 01:30:20.774885 env[1561]: time="2025-11-01T01:30:20.774838679Z" level=info msg="Forcibly stopping sandbox \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\"" Nov 1 01:30:20.809787 env[1561]: 2025-11-01 01:30:20.792 [WARNING][6124] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"79df0ba2-6e86-422c-8f93-652dfb942b69", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 29, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-34cd8b9336", ContainerID:"95b233e8b63b35cb3bea9c5be7d405b67e7b248db1e04a93cca0d1e0618226bf", Pod:"csi-node-driver-9wz7k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali35a2f823f05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:30:20.809787 env[1561]: 2025-11-01 01:30:20.792 [INFO][6124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:30:20.809787 env[1561]: 2025-11-01 01:30:20.792 [INFO][6124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" iface="eth0" netns="" Nov 1 01:30:20.809787 env[1561]: 2025-11-01 01:30:20.792 [INFO][6124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:30:20.809787 env[1561]: 2025-11-01 01:30:20.792 [INFO][6124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:30:20.809787 env[1561]: 2025-11-01 01:30:20.802 [INFO][6140] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" HandleID="k8s-pod-network.6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Workload="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:20.809787 env[1561]: 2025-11-01 01:30:20.802 [INFO][6140] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.809787 env[1561]: 2025-11-01 01:30:20.802 [INFO][6140] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.809787 env[1561]: 2025-11-01 01:30:20.806 [WARNING][6140] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" HandleID="k8s-pod-network.6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Workload="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:20.809787 env[1561]: 2025-11-01 01:30:20.806 [INFO][6140] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" HandleID="k8s-pod-network.6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Workload="ci--3510.3.8--n--34cd8b9336-k8s-csi--node--driver--9wz7k-eth0" Nov 1 01:30:20.809787 env[1561]: 2025-11-01 01:30:20.808 [INFO][6140] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.809787 env[1561]: 2025-11-01 01:30:20.808 [INFO][6124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7" Nov 1 01:30:20.809787 env[1561]: time="2025-11-01T01:30:20.809775341Z" level=info msg="TearDown network for sandbox \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\" successfully" Nov 1 01:30:20.811111 env[1561]: time="2025-11-01T01:30:20.811070159Z" level=info msg="RemovePodSandbox \"6e4ef090bf41ea4b91ed88445a6ddfa41fecf36a3a355e363965e813646e9ba7\" returns successfully" Nov 1 01:30:20.811391 env[1561]: time="2025-11-01T01:30:20.811379402Z" level=info msg="StopPodSandbox for \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\"" Nov 1 01:30:20.844627 env[1561]: 2025-11-01 01:30:20.828 [WARNING][6165] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-whisker--7875cf6866--bnfcp-eth0" Nov 1 01:30:20.844627 env[1561]: 2025-11-01 01:30:20.829 [INFO][6165] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:30:20.844627 env[1561]: 2025-11-01 01:30:20.829 [INFO][6165] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" iface="eth0" netns="" Nov 1 01:30:20.844627 env[1561]: 2025-11-01 01:30:20.829 [INFO][6165] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:30:20.844627 env[1561]: 2025-11-01 01:30:20.829 [INFO][6165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:30:20.844627 env[1561]: 2025-11-01 01:30:20.838 [INFO][6182] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" HandleID="k8s-pod-network.17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Workload="ci--3510.3.8--n--34cd8b9336-k8s-whisker--7875cf6866--bnfcp-eth0" Nov 1 01:30:20.844627 env[1561]: 2025-11-01 01:30:20.838 [INFO][6182] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.844627 env[1561]: 2025-11-01 01:30:20.838 [INFO][6182] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:30:20.844627 env[1561]: 2025-11-01 01:30:20.842 [WARNING][6182] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" HandleID="k8s-pod-network.17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Workload="ci--3510.3.8--n--34cd8b9336-k8s-whisker--7875cf6866--bnfcp-eth0" Nov 1 01:30:20.844627 env[1561]: 2025-11-01 01:30:20.842 [INFO][6182] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" HandleID="k8s-pod-network.17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Workload="ci--3510.3.8--n--34cd8b9336-k8s-whisker--7875cf6866--bnfcp-eth0" Nov 1 01:30:20.844627 env[1561]: 2025-11-01 01:30:20.843 [INFO][6182] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.844627 env[1561]: 2025-11-01 01:30:20.843 [INFO][6165] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:30:20.844918 env[1561]: time="2025-11-01T01:30:20.844648503Z" level=info msg="TearDown network for sandbox \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\" successfully" Nov 1 01:30:20.844918 env[1561]: time="2025-11-01T01:30:20.844672515Z" level=info msg="StopPodSandbox for \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\" returns successfully" Nov 1 01:30:20.844974 env[1561]: time="2025-11-01T01:30:20.844962069Z" level=info msg="RemovePodSandbox for \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\"" Nov 1 01:30:20.844999 env[1561]: time="2025-11-01T01:30:20.844980314Z" level=info msg="Forcibly stopping sandbox \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\"" Nov 1 01:30:20.878432 env[1561]: 2025-11-01 01:30:20.861 [WARNING][6208] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" WorkloadEndpoint="ci--3510.3.8--n--34cd8b9336-k8s-whisker--7875cf6866--bnfcp-eth0" Nov 1 01:30:20.878432 env[1561]: 2025-11-01 01:30:20.861 [INFO][6208] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:30:20.878432 env[1561]: 2025-11-01 01:30:20.861 [INFO][6208] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" iface="eth0" netns="" Nov 1 01:30:20.878432 env[1561]: 2025-11-01 01:30:20.861 [INFO][6208] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:30:20.878432 env[1561]: 2025-11-01 01:30:20.861 [INFO][6208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:30:20.878432 env[1561]: 2025-11-01 01:30:20.871 [INFO][6224] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" HandleID="k8s-pod-network.17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Workload="ci--3510.3.8--n--34cd8b9336-k8s-whisker--7875cf6866--bnfcp-eth0" Nov 1 01:30:20.878432 env[1561]: 2025-11-01 01:30:20.872 [INFO][6224] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:30:20.878432 env[1561]: 2025-11-01 01:30:20.872 [INFO][6224] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:30:20.878432 env[1561]: 2025-11-01 01:30:20.876 [WARNING][6224] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" HandleID="k8s-pod-network.17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Workload="ci--3510.3.8--n--34cd8b9336-k8s-whisker--7875cf6866--bnfcp-eth0" Nov 1 01:30:20.878432 env[1561]: 2025-11-01 01:30:20.876 [INFO][6224] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" HandleID="k8s-pod-network.17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Workload="ci--3510.3.8--n--34cd8b9336-k8s-whisker--7875cf6866--bnfcp-eth0" Nov 1 01:30:20.878432 env[1561]: 2025-11-01 01:30:20.877 [INFO][6224] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:30:20.878432 env[1561]: 2025-11-01 01:30:20.877 [INFO][6208] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c" Nov 1 01:30:20.879082 env[1561]: time="2025-11-01T01:30:20.878454293Z" level=info msg="TearDown network for sandbox \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\" successfully" Nov 1 01:30:20.880079 env[1561]: time="2025-11-01T01:30:20.880065703Z" level=info msg="RemovePodSandbox \"17fdd0fae99bb21b89e442a84da4952c0ec6d243858eaae00325f004aeccee0c\" returns successfully" Nov 1 01:30:28.155338 kubelet[2505]: E1101 01:30:28.155225 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:30:28.156535 kubelet[2505]: E1101 01:30:28.156329 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:30:29.154479 kubelet[2505]: E1101 01:30:29.154416 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:30:29.154985 kubelet[2505]: E1101 01:30:29.154913 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:30:30.156357 kubelet[2505]: E1101 01:30:30.156258 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:30:30.156357 kubelet[2505]: E1101 01:30:30.156294 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:30:39.156507 env[1561]: time="2025-11-01T01:30:39.156390015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:30:39.568440 env[1561]: time="2025-11-01T01:30:39.568335642Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:39.568995 env[1561]: time="2025-11-01T01:30:39.568903366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:30:39.569268 kubelet[2505]: E1101 01:30:39.569212 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:30:39.569729 kubelet[2505]: E1101 01:30:39.569282 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:30:39.569729 kubelet[2505]: E1101 01:30:39.569495 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b6cfc8885-lqwd9_calico-apiserver(cb08aa02-32db-4371-b5cc-c9a5a7fd22c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:39.569729 kubelet[2505]: E1101 01:30:39.569550 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:30:42.155763 env[1561]: time="2025-11-01T01:30:42.155656653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:30:42.514393 env[1561]: time="2025-11-01T01:30:42.514292567Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:42.515600 env[1561]: time="2025-11-01T01:30:42.515445361Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:30:42.516039 kubelet[2505]: E1101 01:30:42.515913 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:30:42.516039 kubelet[2505]: E1101 01:30:42.516010 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:30:42.517026 kubelet[2505]: E1101 01:30:42.516216 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-5bff9f9fd4-4vmq2_calico-system(593908a5-f718-4b03-b095-540ff204a4bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:42.517026 kubelet[2505]: E1101 01:30:42.516333 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:30:43.154056 env[1561]: time="2025-11-01T01:30:43.153982084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:30:43.562174 env[1561]: time="2025-11-01T01:30:43.562040344Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:43.563180 env[1561]: time="2025-11-01T01:30:43.563036294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:30:43.563604 kubelet[2505]: E1101 01:30:43.563471 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:30:43.563604 kubelet[2505]: E1101 01:30:43.563572 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:30:43.564512 kubelet[2505]: E1101 01:30:43.563743 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-649c6d6f48-6pq8q_calico-system(2743a542-7119-47a8-937d-fec5c85bdcf2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:43.565595 env[1561]: time="2025-11-01T01:30:43.565523942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:30:43.943199 env[1561]: time="2025-11-01T01:30:43.943070102Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:43.944097 env[1561]: time="2025-11-01T01:30:43.943971041Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:30:43.944534 kubelet[2505]: E1101 01:30:43.944458 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:30:43.944738 kubelet[2505]: E1101 01:30:43.944552 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:30:43.944873 kubelet[2505]: E1101 01:30:43.944728 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-649c6d6f48-6pq8q_calico-system(2743a542-7119-47a8-937d-fec5c85bdcf2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:43.945009 kubelet[2505]: E1101 01:30:43.944832 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:30:44.154896 env[1561]: time="2025-11-01T01:30:44.154841402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:30:44.525196 env[1561]: time="2025-11-01T01:30:44.525139125Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:44.525913 env[1561]: time="2025-11-01T01:30:44.525872650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:30:44.526159 kubelet[2505]: E1101 01:30:44.526124 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:30:44.526235 kubelet[2505]: E1101 01:30:44.526170 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:30:44.526596 env[1561]: time="2025-11-01T01:30:44.526439952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:30:44.526778 kubelet[2505]: E1101 01:30:44.526760 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:44.887696 env[1561]: time="2025-11-01T01:30:44.887620290Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:44.899237 env[1561]: time="2025-11-01T01:30:44.899206571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:30:44.899415 kubelet[2505]: E1101 01:30:44.899386 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:30:44.899615 kubelet[2505]: E1101 01:30:44.899421 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:30:44.899615 kubelet[2505]: E1101 01:30:44.899551 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b6cfc8885-rhjss_calico-apiserver(438a7b01-7b7b-439d-a5c9-a6d4d681a41f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:44.899615 kubelet[2505]: E1101 01:30:44.899585 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:30:44.899716 env[1561]: time="2025-11-01T01:30:44.899612138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:30:45.264784 env[1561]: time="2025-11-01T01:30:45.264742619Z" level=info 
msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:45.265168 env[1561]: time="2025-11-01T01:30:45.265137207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:30:45.265366 kubelet[2505]: E1101 01:30:45.265335 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:30:45.265448 kubelet[2505]: E1101 01:30:45.265374 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:30:45.265587 kubelet[2505]: E1101 01:30:45.265538 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-6xhxh_calico-system(9aac0066-097a-4582-8ce8-a3a1ddb41b3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:45.265587 kubelet[2505]: E1101 01:30:45.265574 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:30:45.265705 env[1561]: time="2025-11-01T01:30:45.265646922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:30:45.666773 env[1561]: time="2025-11-01T01:30:45.666507762Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:30:45.668828 env[1561]: time="2025-11-01T01:30:45.668641440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:30:45.669232 kubelet[2505]: E1101 01:30:45.669123 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:30:45.669534 kubelet[2505]: E1101 
01:30:45.669250 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:30:45.669787 kubelet[2505]: E1101 01:30:45.669525 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:30:45.669787 kubelet[2505]: E1101 01:30:45.669694 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:30:53.155336 kubelet[2505]: E1101 01:30:53.155216 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:30:55.155262 kubelet[2505]: E1101 01:30:55.155155 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:30:56.155556 kubelet[2505]: E1101 01:30:56.155479 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:30:57.155818 kubelet[2505]: E1101 01:30:57.155711 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:30:58.156166 kubelet[2505]: E1101 01:30:58.156057 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:30:59.154645 kubelet[2505]: E1101 01:30:59.154613 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:31:06.156298 kubelet[2505]: E1101 01:31:06.156149 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:31:06.157627 kubelet[2505]: E1101 01:31:06.156341 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:31:08.153731 kubelet[2505]: E1101 01:31:08.153679 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:31:10.154290 kubelet[2505]: E1101 01:31:10.154256 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:31:13.156641 kubelet[2505]: E1101 01:31:13.156529 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:31:13.156641 kubelet[2505]: E1101 01:31:13.156558 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:31:15.868000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:15.909093 kernel: kauditd_printk_skb: 26 callbacks suppressed Nov 1 01:31:15.909208 kernel: audit: type=1400 audit(1761960675.868:1327): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:15.868000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0015e8b40 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:31:16.112487 kernel: audit: type=1300 audit(1761960675.868:1327): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0015e8b40 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:31:16.112567 kernel: audit: type=1400 audit(1761960675.868:1328): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:15.868000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:15.868000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:31:16.291364 kernel: audit: type=1327 audit(1761960675.868:1327): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:31:16.291439 kernel: audit: type=1300 audit(1761960675.868:1328): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c003350520 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:31:15.868000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c003350520 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:31:16.412064 kernel: audit: type=1327 audit(1761960675.868:1328): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:31:15.868000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:31:15.899000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sda9" ino=520985 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:16.596319 kernel: audit: type=1400 audit(1761960675.899:1329): avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sda9" ino=520985 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:16.596410 kernel: audit: type=1300 audit(1761960675.899:1329): arch=c000003e syscall=254 success=no exit=-13 a0=74 a1=c014f0aa80 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:31:15.899000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=74 a1=c014f0aa80 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:31:16.695009 kernel: audit: type=1327 audit(1761960675.899:1329): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:31:15.899000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:31:16.788467 kernel: audit: type=1400 audit(1761960675.899:1330): avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:15.899000 audit[2293]: AVC avc: 
denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:15.899000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c0108d6900 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:31:15.899000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:31:15.899000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sda9" ino=520991 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:15.899000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=74 a1=c00ecdeba0 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:31:15.899000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:31:15.899000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:15.899000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=79 a1=c000c93350 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:31:15.899000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:31:15.901000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:15.901000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=74 a1=c00827c330 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:31:15.901000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:31:15.901000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:15.901000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c006517de0 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:31:15.901000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:31:18.694000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:18.694000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c003350620 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:31:18.694000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:31:18.694000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:18.694000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c003927860 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:31:18.694000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:31:18.694000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:18.694000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c002a6d040 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:31:18.694000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:31:18.694000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:31:18.694000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002a6d060 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:31:18.694000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:31:20.156041 kubelet[2505]: E1101 01:31:20.155929 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:31:20.157059 kubelet[2505]: E1101 01:31:20.156244 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:31:20.157264 env[1561]: time="2025-11-01T01:31:20.156757321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:31:20.519519 env[1561]: time="2025-11-01T01:31:20.519357021Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:31:20.520679 env[1561]: time="2025-11-01T01:31:20.520514482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:31:20.521034 kubelet[2505]: E1101 01:31:20.520902 2505 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:31:20.521034 kubelet[2505]: E1101 01:31:20.521009 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:31:20.521471 kubelet[2505]: E1101 01:31:20.521183 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b6cfc8885-lqwd9_calico-apiserver(cb08aa02-32db-4371-b5cc-c9a5a7fd22c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:31:20.521471 kubelet[2505]: E1101 01:31:20.521267 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:31:23.154104 kubelet[2505]: E1101 01:31:23.154078 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:31:25.154414 env[1561]: time="2025-11-01T01:31:25.154344448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:31:25.543486 env[1561]: time="2025-11-01T01:31:25.543335584Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:31:25.544578 env[1561]: time="2025-11-01T01:31:25.544389733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:31:25.544970 kubelet[2505]: E1101 01:31:25.544850 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:31:25.544970 kubelet[2505]: E1101 
01:31:25.544935 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:31:25.545958 kubelet[2505]: E1101 01:31:25.545104 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-649c6d6f48-6pq8q_calico-system(2743a542-7119-47a8-937d-fec5c85bdcf2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:31:25.546998 env[1561]: time="2025-11-01T01:31:25.546872123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:31:25.929858 env[1561]: time="2025-11-01T01:31:25.929627456Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:31:25.930894 env[1561]: time="2025-11-01T01:31:25.930737117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:31:25.931283 kubelet[2505]: E1101 01:31:25.931180 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:31:25.931495 kubelet[2505]: E1101 01:31:25.931277 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:31:25.931652 kubelet[2505]: E1101 01:31:25.931470 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-649c6d6f48-6pq8q_calico-system(2743a542-7119-47a8-937d-fec5c85bdcf2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:31:25.931787 kubelet[2505]: E1101 01:31:25.931629 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:31:28.156515 env[1561]: time="2025-11-01T01:31:28.156351342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:31:28.520262 env[1561]: time="2025-11-01T01:31:28.520114898Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:31:28.521155 env[1561]: time="2025-11-01T01:31:28.521006450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:31:28.521580 kubelet[2505]: E1101 01:31:28.521449 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:31:28.521580 kubelet[2505]: E1101 01:31:28.521539 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:31:28.522746 kubelet[2505]: E1101 01:31:28.521693 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:31:28.523850 env[1561]: time="2025-11-01T01:31:28.523716199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:31:28.904689 env[1561]: time="2025-11-01T01:31:28.904538152Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:31:28.905134 env[1561]: time="2025-11-01T01:31:28.905040022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:31:28.905331 kubelet[2505]: E1101 01:31:28.905290 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:31:28.905445 kubelet[2505]: E1101 01:31:28.905341 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:31:28.905512 kubelet[2505]: E1101 01:31:28.905441 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:31:28.905616 kubelet[2505]: E1101 01:31:28.905500 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:31:31.154127 env[1561]: time="2025-11-01T01:31:31.154100911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:31:31.510502 env[1561]: time="2025-11-01T01:31:31.510436476Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:31:31.510846 env[1561]: time="2025-11-01T01:31:31.510789907Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:31:31.511000 kubelet[2505]: E1101 01:31:31.510938 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:31:31.511000 kubelet[2505]: E1101 01:31:31.510973 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:31:31.511262 kubelet[2505]: E1101 01:31:31.511037 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-6xhxh_calico-system(9aac0066-097a-4582-8ce8-a3a1ddb41b3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:31:31.511262 kubelet[2505]: E1101 01:31:31.511063 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:31:33.155785 env[1561]: time="2025-11-01T01:31:33.155660196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:31:33.510869 env[1561]: time="2025-11-01T01:31:33.510759738Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:31:33.511828 env[1561]: time="2025-11-01T01:31:33.511717581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:31:33.512226 kubelet[2505]: E1101 01:31:33.512147 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:31:33.513007 kubelet[2505]: E1101 01:31:33.512243 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:31:33.513007 kubelet[2505]: E1101 01:31:33.512435 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5bff9f9fd4-4vmq2_calico-system(593908a5-f718-4b03-b095-540ff204a4bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:31:33.513007 kubelet[2505]: E1101 01:31:33.512526 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:31:34.153682 env[1561]: time="2025-11-01T01:31:34.153651350Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:31:34.519436 env[1561]: time="2025-11-01T01:31:34.519370180Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:31:34.519794 env[1561]: time="2025-11-01T01:31:34.519740892Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:31:34.519955 kubelet[2505]: E1101 01:31:34.519901 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:31:34.519955 kubelet[2505]: E1101 01:31:34.519929 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:31:34.520118 kubelet[2505]: E1101 01:31:34.519984 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b6cfc8885-rhjss_calico-apiserver(438a7b01-7b7b-439d-a5c9-a6d4d681a41f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:31:34.520118 kubelet[2505]: E1101 01:31:34.520004 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:31:35.156139 kubelet[2505]: E1101 01:31:35.156007 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:31:41.154538 kubelet[2505]: E1101 01:31:41.154462 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:31:41.154538 kubelet[2505]: E1101 01:31:41.154462 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:31:45.155774 kubelet[2505]: E1101 01:31:45.155640 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:31:45.156781 kubelet[2505]: E1101 01:31:45.155785 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:31:46.155949 kubelet[2505]: E1101 01:31:46.155816 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:31:49.154364 kubelet[2505]: E1101 01:31:49.154326 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:31:55.154382 kubelet[2505]: E1101 01:31:55.154354 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:31:56.160408 kubelet[2505]: E1101 01:31:56.160367 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:31:57.153855 kubelet[2505]: E1101 01:31:57.153783 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:31:58.154315 kubelet[2505]: E1101 01:31:58.154290 2505 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:32:00.156566 kubelet[2505]: E1101 01:32:00.156460 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:32:04.154052 kubelet[2505]: E1101 01:32:04.154023 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:32:07.157218 kubelet[2505]: E1101 01:32:07.157110 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:32:08.155044 kubelet[2505]: E1101 01:32:08.154962 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:32:09.155831 kubelet[2505]: E1101 01:32:09.155788 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:32:12.155877 kubelet[2505]: E1101 01:32:12.155750 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:32:12.156843 kubelet[2505]: E1101 01:32:12.156075 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:32:15.155428 kubelet[2505]: E1101 01:32:15.155324 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:32:15.868000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:15.897003 kernel: kauditd_printk_skb: 26 callbacks suppressed Nov 1 01:32:15.897083 kernel: audit: type=1400 audit(1761960735.868:1339): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" 
dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:15.868000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c002ebca20 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:32:16.107297 kernel: audit: type=1300 audit(1761960735.868:1339): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c002ebca20 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:32:16.107389 kernel: audit: type=1327 audit(1761960735.868:1339): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:32:15.868000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:32:15.868000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:16.200468 kernel: audit: type=1400 audit(1761960735.868:1340): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:15.868000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001d0b1a0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:32:16.410103 kernel: audit: type=1300 audit(1761960735.868:1340): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001d0b1a0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:32:16.410180 kernel: audit: type=1327 audit(1761960735.868:1340): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:32:15.868000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:32:16.503417 kernel: audit: type=1400 audit(1761960735.899:1341): avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sda9" ino=520985 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:15.899000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sda9" ino=520985 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:15.899000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c012036900 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:32:16.692857 kernel: audit: type=1300 audit(1761960735.899:1341): arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c012036900 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:32:16.692917 kernel: audit: type=1327 audit(1761960735.899:1341): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:32:15.899000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:32:15.899000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sda9" ino=520991 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:16.877859 kernel: audit: type=1400 audit(1761960735.899:1342): avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sda9" ino=520991 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:15.899000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c00ef04ae0 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:32:15.899000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 
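
The audit records immediately above and below come in SELinux's AVC/SYSCALL/PROCTITLE triplets: the AVC line names the denied permission (watch, class file, on etc_t-labelled certificates under /etc/kubernetes/pki), the SYSCALL line shows arch=c000003e (x86_64) syscall=254, i.e. inotify_add_watch, returning exit=-13 (EACCES), and the PROCTITLE line carries the process command line hex-encoded with NUL bytes separating arguments. A minimal sketch for decoding such a record offline; the constant below is a truncated copy of the kube-controller-manager value from the records above, and any full proctitle string from the log can be pasted in its place:

#!/usr/bin/env python3
# Decode an audit PROCTITLE field: hex-encoded argv with NUL separators.
PROCTITLE = (
    "6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F63617465"
    "2D6E6F64652D63696472733D74727565"
)  # truncated sample taken from the kube-controller-manager records above

argv = bytes.fromhex(PROCTITLE).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> kube-controller-manager --allocate-node-cidrs=true
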
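The image pull failures that dominate this log all end in the same runtime message, "trying next host - response was http.StatusNotFound": the registry answers 404 for the v3.30.4 manifests, which suggests those tags are not published (or not publicly visible) under ghcr.io/flatcar/calico. A minimal sketch for checking one tag directly against a registry that implements the OCI distribution API; the anonymous-token endpoint and Accept values are assumptions about ghcr.io's behaviour, not something taken from the log:

#!/usr/bin/env python3
# Check whether an image tag resolves on an OCI-distribution registry;
# an HTTP 404 here corresponds to the NotFound pull errors in the log.
import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPO = "flatcar/calico/whisker"   # one of the repositories failing above
TAG = "v3.30.4"

# Assumption: ghcr.io issues anonymous pull tokens for public repositories.
token_url = f"https://{REGISTRY}/token?scope=repository:{REPO}:pull"
token = json.load(urllib.request.urlopen(token_url))["token"]

req = urllib.request.Request(
    f"https://{REGISTRY}/v2/{REPO}/manifests/{TAG}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json, "
                  "application/vnd.docker.distribution.manifest.list.v2+json",
    },
)
try:
    with urllib.request.urlopen(req) as resp:
        print(f"{REPO}:{TAG} exists (HTTP {resp.status})")
except urllib.error.HTTPError as err:
    print(f"{REPO}:{TAG} -> HTTP {err.code} (404 = tag not published)")
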
Nov 1 01:32:15.899000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:15.899000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c0035b8160 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:32:15.899000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:32:15.899000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:15.899000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c013252030 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:32:15.899000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:32:15.900000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:15.900000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c01321c000 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:32:15.900000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:32:15.900000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:15.900000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c007ebef20 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:32:15.900000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:32:18.693000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:18.693000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c003351420 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:32:18.693000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:32:18.693000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:18.693000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c003351440 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:32:18.693000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:32:18.693000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:18.693000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00008e8e0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:32:18.693000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:32:18.693000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:32:18.693000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0009d2240 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:32:18.693000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:32:19.154401 kubelet[2505]: E1101 01:32:19.154331 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:32:22.153704 kubelet[2505]: E1101 01:32:22.153677 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:32:23.156937 kubelet[2505]: E1101 01:32:23.156830 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:32:25.154106 kubelet[2505]: E1101 01:32:25.154062 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:32:26.154332 kubelet[2505]: E1101 01:32:26.154274 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:32:27.154143 kubelet[2505]: E1101 01:32:27.154119 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:32:30.154298 kubelet[2505]: E1101 01:32:30.154255 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:32:34.153734 kubelet[2505]: E1101 01:32:34.153670 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:32:37.154209 kubelet[2505]: E1101 01:32:37.154181 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:32:37.154528 kubelet[2505]: E1101 01:32:37.154446 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:32:38.154127 kubelet[2505]: E1101 01:32:38.154093 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:32:40.154591 kubelet[2505]: E1101 01:32:40.154561 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:32:41.154445 kubelet[2505]: E1101 01:32:41.154419 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:32:48.154610 kubelet[2505]: E1101 01:32:48.154563 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:32:49.154595 env[1561]: time="2025-11-01T01:32:49.154564533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:32:49.535571 env[1561]: time="2025-11-01T01:32:49.535510310Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:32:49.549317 env[1561]: time="2025-11-01T01:32:49.549260846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:32:49.549498 kubelet[2505]: E1101 01:32:49.549449 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:32:49.549498 kubelet[2505]: E1101 01:32:49.549478 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:32:49.549726 kubelet[2505]: E1101 01:32:49.549530 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:32:49.550200 env[1561]: time="2025-11-01T01:32:49.550158777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:32:49.885821 env[1561]: time="2025-11-01T01:32:49.885555757Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:32:49.886631 env[1561]: time="2025-11-01T01:32:49.886516534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:32:49.887142 kubelet[2505]: E1101 01:32:49.887000 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:32:49.887142 kubelet[2505]: E1101 01:32:49.887114 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:32:49.887555 kubelet[2505]: E1101 01:32:49.887282 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:32:49.887555 kubelet[2505]: E1101 01:32:49.887437 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:32:50.162010 env[1561]: time="2025-11-01T01:32:50.161768898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:32:50.520571 env[1561]: time="2025-11-01T01:32:50.520457644Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:32:50.521559 env[1561]: time="2025-11-01T01:32:50.521419736Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:32:50.522027 kubelet[2505]: E1101 01:32:50.521939 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:32:50.522212 kubelet[2505]: E1101 
01:32:50.522048 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:32:50.522346 kubelet[2505]: E1101 01:32:50.522251 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b6cfc8885-lqwd9_calico-apiserver(cb08aa02-32db-4371-b5cc-c9a5a7fd22c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:32:50.522505 kubelet[2505]: E1101 01:32:50.522376 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:32:51.153748 kubelet[2505]: E1101 01:32:51.153723 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:32:54.153661 env[1561]: time="2025-11-01T01:32:54.153621328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:32:54.520726 env[1561]: time="2025-11-01T01:32:54.520564452Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:32:54.521573 env[1561]: time="2025-11-01T01:32:54.521435530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:32:54.521961 kubelet[2505]: E1101 01:32:54.521858 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:32:54.522792 kubelet[2505]: E1101 01:32:54.521971 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:32:54.522792 kubelet[2505]: E1101 01:32:54.522335 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5bff9f9fd4-4vmq2_calico-system(593908a5-f718-4b03-b095-540ff204a4bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:32:54.522792 kubelet[2505]: E1101 01:32:54.522481 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:32:54.523394 env[1561]: time="2025-11-01T01:32:54.522837568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:32:54.863139 env[1561]: time="2025-11-01T01:32:54.863015506Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:32:54.863628 env[1561]: time="2025-11-01T01:32:54.863553056Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:32:54.863843 kubelet[2505]: E1101 01:32:54.863778 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:32:54.863843 kubelet[2505]: E1101 01:32:54.863823 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:32:54.863967 kubelet[2505]: E1101 01:32:54.863892 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-649c6d6f48-6pq8q_calico-system(2743a542-7119-47a8-937d-fec5c85bdcf2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:32:54.864604 env[1561]: time="2025-11-01T01:32:54.864559364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:32:55.216643 env[1561]: time="2025-11-01T01:32:55.216531601Z" level=info msg="trying next host - response 
was http.StatusNotFound" host=ghcr.io Nov 1 01:32:55.217591 env[1561]: time="2025-11-01T01:32:55.217372381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:32:55.217925 kubelet[2505]: E1101 01:32:55.217836 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:32:55.218110 kubelet[2505]: E1101 01:32:55.217939 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:32:55.218110 kubelet[2505]: E1101 01:32:55.218089 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-649c6d6f48-6pq8q_calico-system(2743a542-7119-47a8-937d-fec5c85bdcf2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:32:55.218319 kubelet[2505]: E1101 01:32:55.218176 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:33:00.158413 kubelet[2505]: E1101 01:33:00.158285 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:33:02.154930 env[1561]: time="2025-11-01T01:33:02.154894053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:33:02.532625 env[1561]: time="2025-11-01T01:33:02.532579235Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:33:02.533088 env[1561]: time="2025-11-01T01:33:02.533019532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:33:02.533289 kubelet[2505]: E1101 01:33:02.533252 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:33:02.533709 kubelet[2505]: E1101 01:33:02.533297 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:33:02.533709 kubelet[2505]: E1101 01:33:02.533379 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-6xhxh_calico-system(9aac0066-097a-4582-8ce8-a3a1ddb41b3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:33:02.533709 kubelet[2505]: E1101 01:33:02.533419 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:33:04.154030 kubelet[2505]: E1101 01:33:04.153976 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:33:04.154589 env[1561]: time="2025-11-01T01:33:04.154247856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:33:04.537875 
env[1561]: time="2025-11-01T01:33:04.537726901Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:33:04.538778 env[1561]: time="2025-11-01T01:33:04.538632079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:33:04.539221 kubelet[2505]: E1101 01:33:04.539115 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:33:04.539423 kubelet[2505]: E1101 01:33:04.539211 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:33:04.539612 kubelet[2505]: E1101 01:33:04.539379 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b6cfc8885-rhjss_calico-apiserver(438a7b01-7b7b-439d-a5c9-a6d4d681a41f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:33:04.539612 kubelet[2505]: E1101 01:33:04.539506 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:33:06.156021 kubelet[2505]: E1101 01:33:06.155912 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:33:09.156832 kubelet[2505]: E1101 01:33:09.156662 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:33:11.156186 kubelet[2505]: E1101 01:33:11.156077 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:33:14.155222 kubelet[2505]: E1101 01:33:14.155110 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:33:15.870000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:15.912494 kernel: kauditd_printk_skb: 26 callbacks suppressed Nov 1 01:33:15.912598 kernel: audit: type=1400 audit(1761960795.870:1351): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:15.870000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0024c9720 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:33:16.121497 kernel: audit: type=1300 audit(1761960795.870:1351): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0024c9720 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:33:16.121583 kernel: audit: type=1327 audit(1761960795.870:1351): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:33:15.870000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:33:16.214077 kernel: audit: type=1400 audit(1761960795.870:1352): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:15.870000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:15.870000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00307f860 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:33:16.425083 kernel: audit: type=1300 audit(1761960795.870:1352): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00307f860 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:33:16.425167 kernel: audit: type=1327 audit(1761960795.870:1352): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:33:15.870000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:33:15.900000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:15.900000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c006f0ca80 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:33:16.708207 kernel: 
audit: type=1400 audit(1761960795.900:1353): avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:16.708297 kernel: audit: type=1300 audit(1761960795.900:1353): arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c006f0ca80 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:33:16.708317 kernel: audit: type=1327 audit(1761960795.900:1353): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:33:15.900000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:33:15.900000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:16.891728 kernel: audit: type=1400 audit(1761960795.900:1354): avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:15.900000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c0079be800 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:33:15.900000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:33:15.900000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sda9" ino=520991 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:15.900000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c00f2a2000 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:33:15.900000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 
01:33:15.900000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sda9" ino=520985 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:15.900000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c014c19650 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:33:15.900000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:33:15.900000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:15.900000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c00f2a2030 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:33:15.900000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:33:15.900000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:15.900000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c0011914a0 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:33:15.900000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:33:17.155911 kubelet[2505]: E1101 01:33:17.155708 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:33:17.155911 kubelet[2505]: E1101 01:33:17.155808 2505 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:33:18.693000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:18.693000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001d0a8c0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:33:18.693000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:33:18.694000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:18.694000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c003350b20 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:33:18.694000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:33:18.694000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:18.694000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0024c9740 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:33:18.694000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:33:18.694000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" 
path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:33:18.694000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000d9ee00 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:33:18.694000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:33:19.156156 kubelet[2505]: E1101 01:33:19.155930 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:33:22.157089 kubelet[2505]: E1101 01:33:22.156985 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:33:24.156793 kubelet[2505]: E1101 01:33:24.156684 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:33:26.153907 kubelet[2505]: E1101 01:33:26.153869 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:33:28.154344 kubelet[2505]: E1101 01:33:28.154286 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:33:29.155605 kubelet[2505]: E1101 01:33:29.155458 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:33:34.154311 kubelet[2505]: E1101 01:33:34.154273 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:33:34.154828 kubelet[2505]: E1101 01:33:34.154808 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:33:38.156589 kubelet[2505]: E1101 01:33:38.156477 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:33:39.153571 kubelet[2505]: E1101 01:33:39.153548 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:33:42.153639 kubelet[2505]: E1101 01:33:42.153577 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:33:44.154023 kubelet[2505]: E1101 01:33:44.153996 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:33:45.155873 kubelet[2505]: E1101 01:33:45.155742 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:33:49.155564 kubelet[2505]: E1101 01:33:49.155507 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:33:51.155371 kubelet[2505]: E1101 01:33:51.155268 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:33:52.154577 kubelet[2505]: E1101 01:33:52.154540 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:33:57.156366 kubelet[2505]: E1101 01:33:57.156225 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:33:58.153271 kubelet[2505]: E1101 01:33:58.153245 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:34:00.154260 kubelet[2505]: E1101 01:34:00.154194 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:34:04.155351 kubelet[2505]: E1101 01:34:04.155259 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:34:04.156667 kubelet[2505]: E1101 01:34:04.156520 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:34:06.154110 kubelet[2505]: E1101 01:34:06.154022 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:34:09.155647 kubelet[2505]: E1101 01:34:09.155524 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:34:10.156723 kubelet[2505]: E1101 01:34:10.156632 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:34:12.161612 kubelet[2505]: E1101 01:34:12.161484 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:34:15.157116 kubelet[2505]: E1101 01:34:15.157013 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:34:15.870000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:15.898730 kernel: kauditd_printk_skb: 26 callbacks suppressed Nov 1 01:34:15.898819 kernel: audit: type=1400 audit(1761960855.870:1363): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:15.870000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0036798c0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:34:16.108501 kernel: audit: type=1300 audit(1761960855.870:1363): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0036798c0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:34:16.108616 kernel: audit: type=1327 audit(1761960855.870:1363): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:34:15.870000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:34:16.154149 kubelet[2505]: E1101 01:34:16.154068 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:34:15.870000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:16.290264 kernel: audit: type=1400 audit(1761960855.870:1364): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:16.290343 kernel: audit: type=1300 audit(1761960855.870:1364): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c002a6cca0 a2=fc6 a3=0 
items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:34:15.870000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c002a6cca0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:34:15.870000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:34:16.504004 kernel: audit: type=1327 audit(1761960855.870:1364): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:34:16.504079 kernel: audit: type=1400 audit(1761960855.900:1365): avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:15.900000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:15.900000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c012ceab40 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:34:16.692588 kernel: audit: type=1300 audit(1761960855.900:1365): arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c012ceab40 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:34:16.692658 kernel: audit: type=1327 audit(1761960855.900:1365): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:34:15.900000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:34:16.785889 kernel: audit: type=1400 audit(1761960855.901:1367): avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sda9" ino=520985 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 
tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:15.901000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sda9" ino=520985 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:15.901000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:15.901000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c00f2a2030 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:34:15.901000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:34:15.901000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c00ecdf3b0 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:34:15.901000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:34:15.901000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sda9" ino=520991 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:15.901000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c00d756000 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:34:15.901000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:34:15.901000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:15.901000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c00d756030 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" 
subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:34:15.901000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:34:15.901000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:15.901000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c01387c720 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:34:15.901000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:34:17.154374 kubelet[2505]: E1101 01:34:17.154288 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:34:18.694000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:18.694000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:18.694000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c003927be0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:34:18.694000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00008e840 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:34:18.694000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:34:18.694000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:34:18.694000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:18.694000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c002a6cd20 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:34:18.694000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:34:18.694000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:34:18.694000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c002a6cd40 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:34:18.694000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:34:23.155688 kubelet[2505]: E1101 01:34:23.155574 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:34:23.155688 kubelet[2505]: E1101 01:34:23.155606 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:34:26.155847 kubelet[2505]: E1101 01:34:26.155736 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:34:26.157108 kubelet[2505]: E1101 01:34:26.156864 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:34:28.158312 kubelet[2505]: E1101 01:34:28.158171 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:34:32.157312 kubelet[2505]: E1101 01:34:32.157145 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:34:36.156017 kubelet[2505]: E1101 01:34:36.155899 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:34:38.155419 kubelet[2505]: E1101 01:34:38.155363 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:34:38.155419 kubelet[2505]: E1101 01:34:38.155380 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:34:38.155771 kubelet[2505]: E1101 01:34:38.155584 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:34:40.162238 kubelet[2505]: E1101 01:34:40.162115 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:34:43.154466 kubelet[2505]: E1101 01:34:43.154439 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:34:50.156939 kubelet[2505]: E1101 01:34:50.156832 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:34:51.154184 kubelet[2505]: E1101 01:34:51.154163 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:34:52.155972 kubelet[2505]: E1101 01:34:52.155849 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:34:52.156976 kubelet[2505]: E1101 01:34:52.156553 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:34:53.154346 kubelet[2505]: E1101 01:34:53.154298 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:34:57.156511 kubelet[2505]: E1101 01:34:57.156346 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:35:01.153783 kubelet[2505]: E1101 01:35:01.153707 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:35:04.157502 kubelet[2505]: E1101 01:35:04.157350 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:35:04.158604 kubelet[2505]: E1101 01:35:04.158588 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:35:06.156505 kubelet[2505]: E1101 01:35:06.156365 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:35:07.153349 kubelet[2505]: E1101 01:35:07.153326 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:35:10.154495 kubelet[2505]: E1101 01:35:10.154460 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:35:12.155605 kubelet[2505]: E1101 01:35:12.155513 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:35:15.153782 kubelet[2505]: E1101 01:35:15.153730 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:35:15.871000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:15.913605 kernel: kauditd_printk_skb: 26 callbacks suppressed Nov 1 01:35:15.913707 kernel: audit: type=1400 audit(1761960915.871:1375): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:15.871000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0009d2980 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:35:16.003408 kernel: audit: type=1300 audit(1761960915.871:1375): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0009d2980 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:35:15.871000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:35:16.215081 kernel: audit: type=1327 audit(1761960915.871:1375): 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:35:16.215146 kernel: audit: type=1400 audit(1761960915.871:1376): avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:15.871000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:15.871000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0030c6b10 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:35:16.426021 kernel: audit: type=1300 audit(1761960915.871:1376): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0030c6b10 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:35:16.426099 kernel: audit: type=1327 audit(1761960915.871:1376): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:35:15.871000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:35:16.519311 kernel: audit: type=1400 audit(1761960915.901:1377): avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:15.901000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:15.901000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c0053f9ee0 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:35:16.708116 kernel: audit: type=1300 audit(1761960915.901:1377): arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c0053f9ee0 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" 
exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:35:16.708214 kernel: audit: type=1327 audit(1761960915.901:1377): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:35:15.901000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:35:15.901000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:16.891745 kernel: audit: type=1400 audit(1761960915.901:1378): avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:15.901000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c0053f9f00 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:35:15.901000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:35:15.901000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:15.901000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c011140600 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:35:15.901000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:35:15.901000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sda9" ino=520985 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:15.901000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c011f9b020 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" 
subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:35:15.901000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:35:15.901000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sda9" ino=520991 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:15.901000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c0139900c0 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:35:15.901000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:35:15.901000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:15.901000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=68 a1=c00827c150 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:35:15.901000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:35:18.153878 kubelet[2505]: E1101 01:35:18.153837 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:35:18.695000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 
tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:18.695000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0009d2b80 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:35:18.695000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:35:18.695000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:18.695000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0015b04e0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:35:18.695000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:35:18.695000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:18.695000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c003429180 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:35:18.695000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:35:18.695000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:35:18.695000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0034291a0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:35:18.695000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:35:19.155529 kubelet[2505]: E1101 01:35:19.155290 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:35:19.155529 kubelet[2505]: E1101 01:35:19.155290 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:35:22.157190 kubelet[2505]: E1101 01:35:22.157065 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:35:26.154099 kubelet[2505]: E1101 01:35:26.154060 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:35:28.157002 kubelet[2505]: E1101 01:35:28.156884 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:35:32.155941 kubelet[2505]: E1101 01:35:32.155850 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:35:32.156912 env[1561]: time="2025-11-01T01:35:32.156459351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:35:32.498583 env[1561]: time="2025-11-01T01:35:32.498442045Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:35:32.499622 env[1561]: time="2025-11-01T01:35:32.499492404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:35:32.500047 kubelet[2505]: E1101 01:35:32.499919 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:35:32.500047 kubelet[2505]: E1101 01:35:32.500019 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:35:32.500443 kubelet[2505]: E1101 01:35:32.500228 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b6cfc8885-lqwd9_calico-apiserver(cb08aa02-32db-4371-b5cc-c9a5a7fd22c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:35:32.500443 kubelet[2505]: E1101 01:35:32.500361 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 
01:35:33.154141 kubelet[2505]: E1101 01:35:33.154080 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:35:37.156162 env[1561]: time="2025-11-01T01:35:37.156036087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:35:37.529080 env[1561]: time="2025-11-01T01:35:37.529014602Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:35:37.540122 env[1561]: time="2025-11-01T01:35:37.540065716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:35:37.540279 kubelet[2505]: E1101 01:35:37.540228 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:35:37.540279 kubelet[2505]: E1101 01:35:37.540261 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:35:37.540586 kubelet[2505]: E1101 01:35:37.540311 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:35:37.540903 env[1561]: time="2025-11-01T01:35:37.540888570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:35:37.925312 env[1561]: time="2025-11-01T01:35:37.925052650Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:35:37.932847 env[1561]: time="2025-11-01T01:35:37.932689550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:35:37.933246 kubelet[2505]: E1101 01:35:37.933129 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:35:37.933246 kubelet[2505]: E1101 01:35:37.933224 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:35:37.933681 kubelet[2505]: E1101 01:35:37.933380 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9wz7k_calico-system(79df0ba2-6e86-422c-8f93-652dfb942b69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:35:37.933681 kubelet[2505]: E1101 01:35:37.933528 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:35:38.156543 env[1561]: time="2025-11-01T01:35:38.156451775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:35:38.493298 env[1561]: time="2025-11-01T01:35:38.493146636Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:35:38.494069 env[1561]: time="2025-11-01T01:35:38.493959682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:35:38.494579 kubelet[2505]: E1101 01:35:38.494456 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:35:38.494579 kubelet[2505]: E1101 01:35:38.494557 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:35:38.494940 kubelet[2505]: E1101 01:35:38.494723 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5bff9f9fd4-4vmq2_calico-system(593908a5-f718-4b03-b095-540ff204a4bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:35:38.494940 kubelet[2505]: E1101 01:35:38.494805 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:35:39.153815 kubelet[2505]: E1101 01:35:39.153790 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:35:47.153600 kubelet[2505]: E1101 01:35:47.153549 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:35:47.153912 env[1561]: time="2025-11-01T01:35:47.153822125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:35:47.481132 env[1561]: time="2025-11-01T01:35:47.481083522Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:35:47.481627 env[1561]: time="2025-11-01T01:35:47.481578774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:35:47.481845 kubelet[2505]: E1101 01:35:47.481805 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:35:47.481928 kubelet[2505]: E1101 01:35:47.481854 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:35:47.481991 kubelet[2505]: E1101 01:35:47.481937 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-6xhxh_calico-system(9aac0066-097a-4582-8ce8-a3a1ddb41b3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:35:47.481991 kubelet[2505]: E1101 01:35:47.481976 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:35:48.156701 env[1561]: time="2025-11-01T01:35:48.156557212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:35:48.569924 env[1561]: time="2025-11-01T01:35:48.569893908Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:35:48.570307 env[1561]: time="2025-11-01T01:35:48.570287911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:35:48.570493 kubelet[2505]: E1101 01:35:48.570473 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:35:48.570654 kubelet[2505]: E1101 01:35:48.570501 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:35:48.570654 kubelet[2505]: E1101 01:35:48.570550 2505 
kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-649c6d6f48-6pq8q_calico-system(2743a542-7119-47a8-937d-fec5c85bdcf2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:35:48.570940 env[1561]: time="2025-11-01T01:35:48.570926325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:35:48.932570 env[1561]: time="2025-11-01T01:35:48.932473172Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:35:48.932916 env[1561]: time="2025-11-01T01:35:48.932881553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:35:48.933098 kubelet[2505]: E1101 01:35:48.933067 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:35:48.933154 kubelet[2505]: E1101 01:35:48.933109 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:35:48.933197 kubelet[2505]: E1101 01:35:48.933175 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-649c6d6f48-6pq8q_calico-system(2743a542-7119-47a8-937d-fec5c85bdcf2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:35:48.933416 kubelet[2505]: E1101 01:35:48.933368 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:35:51.156999 kubelet[2505]: E1101 01:35:51.156890 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:35:52.155743 env[1561]: time="2025-11-01T01:35:52.155654227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:35:52.521305 env[1561]: time="2025-11-01T01:35:52.521235787Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:35:52.521792 env[1561]: time="2025-11-01T01:35:52.521720787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:35:52.521984 kubelet[2505]: E1101 01:35:52.521924 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:35:52.521984 kubelet[2505]: E1101 01:35:52.521965 2505 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:35:52.522423 kubelet[2505]: E1101 01:35:52.522037 2505 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7b6cfc8885-rhjss_calico-apiserver(438a7b01-7b7b-439d-a5c9-a6d4d681a41f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:35:52.522423 kubelet[2505]: E1101 01:35:52.522070 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:35:54.155002 kubelet[2505]: 
E1101 01:35:54.154906 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:35:58.154254 kubelet[2505]: E1101 01:35:58.154220 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:35:58.771261 systemd[1]: Started sshd@10-139.178.94.15:22-147.75.109.163:54466.service. Nov 1 01:35:58.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.94.15:22-147.75.109.163:54466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:35:58.799306 kernel: kauditd_printk_skb: 26 callbacks suppressed Nov 1 01:35:58.799433 kernel: audit: type=1130 audit(1761960958.770:1387): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.94.15:22-147.75.109.163:54466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:35:58.916000 audit[6757]: USER_ACCT pid=6757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:35:58.917613 sshd[6757]: Accepted publickey for core from 147.75.109.163 port 54466 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:35:58.918899 sshd[6757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:35:58.921613 systemd-logind[1596]: New session 12 of user core. Nov 1 01:35:58.922059 systemd[1]: Started session-12.scope. Nov 1 01:35:59.003420 sshd[6757]: pam_unix(sshd:session): session closed for user core Nov 1 01:35:59.004945 systemd[1]: sshd@10-139.178.94.15:22-147.75.109.163:54466.service: Deactivated successfully. Nov 1 01:35:59.005386 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 01:35:59.005783 systemd-logind[1596]: Session 12 logged out. Waiting for processes to exit. Nov 1 01:35:59.006285 systemd-logind[1596]: Removed session 12. 
Nov 1 01:35:58.917000 audit[6757]: CRED_ACQ pid=6757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:35:59.099933 kernel: audit: type=1101 audit(1761960958.916:1388): pid=6757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:35:59.099997 kernel: audit: type=1103 audit(1761960958.917:1389): pid=6757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:35:59.100018 kernel: audit: type=1006 audit(1761960958.917:1390): pid=6757 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Nov 1 01:35:59.154084 kubelet[2505]: E1101 01:35:59.154035 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:35:59.158593 kernel: audit: type=1300 audit(1761960958.917:1390): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc935cbc0 a2=3 a3=0 items=0 ppid=1 pid=6757 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:35:58.917000 audit[6757]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc935cbc0 a2=3 a3=0 items=0 ppid=1 pid=6757 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:35:58.917000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:35:59.281815 kernel: audit: type=1327 audit(1761960958.917:1390): proctitle=737368643A20636F7265205B707269765D Nov 1 01:35:59.281857 kernel: audit: type=1105 audit(1761960958.922:1391): pid=6757 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:35:58.922000 audit[6757]: USER_START pid=6757 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:35:58.923000 audit[6759]: CRED_ACQ pid=6759 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:35:59.466843 kernel: audit: type=1103 audit(1761960958.923:1392): pid=6759 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:35:59.466896 kernel: audit: type=1106 audit(1761960959.002:1393): pid=6757 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:35:59.002000 audit[6757]: USER_END pid=6757 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:35:59.002000 audit[6757]: CRED_DISP pid=6757 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:35:59.652451 kernel: audit: type=1104 audit(1761960959.002:1394): pid=6757 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:35:59.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.94.15:22-147.75.109.163:54466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:03.154839 kubelet[2505]: E1101 01:36:03.154800 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:36:04.011038 systemd[1]: Started sshd@11-139.178.94.15:22-147.75.109.163:60818.service. Nov 1 01:36:04.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.94.15:22-147.75.109.163:60818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:36:04.038283 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 01:36:04.038333 kernel: audit: type=1130 audit(1761960964.010:1396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.94.15:22-147.75.109.163:60818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:04.154229 kubelet[2505]: E1101 01:36:04.154178 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:36:04.157000 audit[6818]: USER_ACCT pid=6818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:04.158280 sshd[6818]: Accepted publickey for core from 147.75.109.163 port 60818 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:04.159905 sshd[6818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:04.162143 systemd-logind[1596]: New session 13 of user core. Nov 1 01:36:04.162707 systemd[1]: Started session-13.scope. Nov 1 01:36:04.239214 sshd[6818]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:04.240530 systemd[1]: sshd@11-139.178.94.15:22-147.75.109.163:60818.service: Deactivated successfully. Nov 1 01:36:04.240961 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 01:36:04.241281 systemd-logind[1596]: Session 13 logged out. Waiting for processes to exit. Nov 1 01:36:04.241811 systemd-logind[1596]: Removed session 13. 
Nov 1 01:36:04.159000 audit[6818]: CRED_ACQ pid=6818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:04.340487 kernel: audit: type=1101 audit(1761960964.157:1397): pid=6818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:04.340549 kernel: audit: type=1103 audit(1761960964.159:1398): pid=6818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:04.340572 kernel: audit: type=1006 audit(1761960964.159:1399): pid=6818 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Nov 1 01:36:04.399013 kernel: audit: type=1300 audit(1761960964.159:1399): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8a801660 a2=3 a3=0 items=0 ppid=1 pid=6818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:04.159000 audit[6818]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8a801660 a2=3 a3=0 items=0 ppid=1 pid=6818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:04.159000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:04.521487 kernel: audit: type=1327 audit(1761960964.159:1399): proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:04.521574 kernel: audit: type=1105 audit(1761960964.164:1400): pid=6818 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:04.164000 audit[6818]: USER_START pid=6818 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:04.165000 audit[6820]: CRED_ACQ pid=6820 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:04.705426 kernel: audit: type=1103 audit(1761960964.165:1401): pid=6820 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:04.705489 kernel: audit: type=1106 audit(1761960964.239:1402): pid=6818 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:04.239000 audit[6818]: USER_END pid=6818 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:04.239000 audit[6818]: CRED_DISP pid=6818 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:04.890478 kernel: audit: type=1104 audit(1761960964.239:1403): pid=6818 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:04.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.94.15:22-147.75.109.163:60818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:05.976723 update_engine[1555]: I1101 01:36:05.976663 1555 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 1 01:36:05.976723 update_engine[1555]: I1101 01:36:05.976732 1555 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 1 01:36:05.978389 update_engine[1555]: I1101 01:36:05.978353 1555 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 1 01:36:05.979180 update_engine[1555]: I1101 01:36:05.979143 1555 omaha_request_params.cc:62] Current group set to lts Nov 1 01:36:05.979474 update_engine[1555]: I1101 01:36:05.979440 1555 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 1 01:36:05.979474 update_engine[1555]: I1101 01:36:05.979461 1555 update_attempter.cc:643] Scheduling an action processor start. 
Nov 1 01:36:05.979766 update_engine[1555]: I1101 01:36:05.979496 1555 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 01:36:05.979766 update_engine[1555]: I1101 01:36:05.979559 1555 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 1 01:36:05.979766 update_engine[1555]: I1101 01:36:05.979719 1555 omaha_request_action.cc:270] Posting an Omaha request to disabled Nov 1 01:36:05.979766 update_engine[1555]: I1101 01:36:05.979739 1555 omaha_request_action.cc:271] Request: Nov 1 01:36:05.979766 update_engine[1555]: Nov 1 01:36:05.979766 update_engine[1555]: Nov 1 01:36:05.979766 update_engine[1555]: Nov 1 01:36:05.979766 update_engine[1555]: Nov 1 01:36:05.979766 update_engine[1555]: Nov 1 01:36:05.979766 update_engine[1555]: Nov 1 01:36:05.979766 update_engine[1555]: Nov 1 01:36:05.979766 update_engine[1555]: Nov 1 01:36:05.979766 update_engine[1555]: I1101 01:36:05.979756 1555 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 01:36:05.980948 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 1 01:36:05.981986 update_engine[1555]: I1101 01:36:05.981957 1555 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 01:36:05.982144 update_engine[1555]: E1101 01:36:05.982120 1555 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 01:36:05.982272 update_engine[1555]: I1101 01:36:05.982244 1555 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 1 01:36:06.154920 kubelet[2505]: E1101 01:36:06.154860 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:36:07.155973 kubelet[2505]: E1101 01:36:07.155874 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:36:09.250066 systemd[1]: Started sshd@12-139.178.94.15:22-147.75.109.163:60834.service. Nov 1 01:36:09.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.94.15:22-147.75.109.163:60834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:36:09.277410 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 01:36:09.277499 kernel: audit: type=1130 audit(1761960969.250:1405): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.94.15:22-147.75.109.163:60834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:09.428000 audit[6861]: USER_ACCT pid=6861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:09.429325 sshd[6861]: Accepted publickey for core from 147.75.109.163 port 60834 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:09.432126 sshd[6861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:09.436912 systemd-logind[1596]: New session 14 of user core. Nov 1 01:36:09.437838 systemd[1]: Started session-14.scope. Nov 1 01:36:09.518509 sshd[6861]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:09.520098 systemd[1]: sshd@12-139.178.94.15:22-147.75.109.163:60834.service: Deactivated successfully. Nov 1 01:36:09.520557 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 01:36:09.431000 audit[6861]: CRED_ACQ pid=6861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:09.520947 systemd-logind[1596]: Session 14 logged out. Waiting for processes to exit. Nov 1 01:36:09.521324 systemd-logind[1596]: Removed session 14. 
Nov 1 01:36:09.611190 kernel: audit: type=1101 audit(1761960969.428:1406): pid=6861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:09.611278 kernel: audit: type=1103 audit(1761960969.431:1407): pid=6861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:09.611299 kernel: audit: type=1006 audit(1761960969.431:1408): pid=6861 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Nov 1 01:36:09.431000 audit[6861]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9b3b55d0 a2=3 a3=0 items=0 ppid=1 pid=6861 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:09.761943 kernel: audit: type=1300 audit(1761960969.431:1408): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9b3b55d0 a2=3 a3=0 items=0 ppid=1 pid=6861 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:09.762035 kernel: audit: type=1327 audit(1761960969.431:1408): proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:09.431000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:09.441000 audit[6861]: USER_START pid=6861 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:09.887210 kernel: audit: type=1105 audit(1761960969.441:1409): pid=6861 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:09.887292 kernel: audit: type=1103 audit(1761960969.442:1410): pid=6863 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:09.442000 audit[6863]: CRED_ACQ pid=6863 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:09.976675 kernel: audit: type=1106 audit(1761960969.518:1411): pid=6861 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:09.518000 audit[6861]: USER_END pid=6861 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:09.518000 audit[6861]: CRED_DISP pid=6861 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:10.161661 kernel: audit: type=1104 audit(1761960969.518:1412): pid=6861 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:09.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.94.15:22-147.75.109.163:60834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:11.154317 kubelet[2505]: E1101 01:36:11.154293 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:36:12.154224 kubelet[2505]: E1101 01:36:12.154200 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:36:14.529917 systemd[1]: Started sshd@13-139.178.94.15:22-147.75.109.163:50098.service. Nov 1 01:36:14.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.94.15:22-147.75.109.163:50098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:14.531416 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 01:36:14.531485 kernel: audit: type=1130 audit(1761960974.529:1414): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.94.15:22-147.75.109.163:50098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:36:14.673000 audit[6893]: USER_ACCT pid=6893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:14.673997 sshd[6893]: Accepted publickey for core from 147.75.109.163 port 50098 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:14.675738 sshd[6893]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:14.678041 systemd-logind[1596]: New session 15 of user core. Nov 1 01:36:14.678586 systemd[1]: Started session-15.scope. Nov 1 01:36:14.757205 sshd[6893]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:14.758974 systemd[1]: sshd@13-139.178.94.15:22-147.75.109.163:50098.service: Deactivated successfully. Nov 1 01:36:14.759355 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 01:36:14.759746 systemd-logind[1596]: Session 15 logged out. Waiting for processes to exit. Nov 1 01:36:14.761888 systemd[1]: Started sshd@14-139.178.94.15:22-147.75.109.163:50106.service. Nov 1 01:36:14.762292 systemd-logind[1596]: Removed session 15. Nov 1 01:36:14.675000 audit[6893]: CRED_ACQ pid=6893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:14.855968 kernel: audit: type=1101 audit(1761960974.673:1415): pid=6893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:14.856065 kernel: audit: type=1103 audit(1761960974.675:1416): pid=6893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:14.856089 kernel: audit: type=1006 audit(1761960974.675:1417): pid=6893 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Nov 1 01:36:14.675000 audit[6893]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeba9b09e0 a2=3 a3=0 items=0 ppid=1 pid=6893 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:14.943777 sshd[6919]: Accepted publickey for core from 147.75.109.163 port 50106 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:14.944630 sshd[6919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:14.946945 systemd-logind[1596]: New session 16 of user core. Nov 1 01:36:14.947420 systemd[1]: Started session-16.scope. 
Nov 1 01:36:15.006665 kernel: audit: type=1300 audit(1761960974.675:1417): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeba9b09e0 a2=3 a3=0 items=0 ppid=1 pid=6893 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:15.006741 kernel: audit: type=1327 audit(1761960974.675:1417): proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:14.675000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:14.680000 audit[6893]: USER_START pid=6893 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.040309 sshd[6919]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:15.042201 systemd[1]: sshd@14-139.178.94.15:22-147.75.109.163:50106.service: Deactivated successfully. Nov 1 01:36:15.042748 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 01:36:15.043123 systemd-logind[1596]: Session 16 logged out. Waiting for processes to exit. Nov 1 01:36:15.043797 systemd[1]: Started sshd@15-139.178.94.15:22-147.75.109.163:50112.service. Nov 1 01:36:15.044185 systemd-logind[1596]: Removed session 16. Nov 1 01:36:15.131780 kernel: audit: type=1105 audit(1761960974.680:1418): pid=6893 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.131839 kernel: audit: type=1103 audit(1761960974.681:1419): pid=6895 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:14.681000 audit[6895]: CRED_ACQ pid=6895 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.221138 kernel: audit: type=1106 audit(1761960974.757:1420): pid=6893 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:14.757000 audit[6893]: USER_END pid=6893 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.250279 sshd[6942]: Accepted publickey for core from 147.75.109.163 port 50112 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:15.251678 sshd[6942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:15.254062 systemd-logind[1596]: New session 17 of user core. Nov 1 01:36:15.254576 systemd[1]: Started session-17.scope. 
Nov 1 01:36:14.757000 audit[6893]: CRED_DISP pid=6893 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.331791 sshd[6942]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:15.333071 systemd[1]: sshd@15-139.178.94.15:22-147.75.109.163:50112.service: Deactivated successfully. Nov 1 01:36:15.333517 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 01:36:15.333902 systemd-logind[1596]: Session 17 logged out. Waiting for processes to exit. Nov 1 01:36:15.334341 systemd-logind[1596]: Removed session 17. Nov 1 01:36:15.406350 kernel: audit: type=1104 audit(1761960974.757:1421): pid=6893 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:14.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.94.15:22-147.75.109.163:50098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:14.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.94.15:22-147.75.109.163:50106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:14.943000 audit[6919]: USER_ACCT pid=6919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:14.943000 audit[6919]: CRED_ACQ pid=6919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:14.943000 audit[6919]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc502cb10 a2=3 a3=0 items=0 ppid=1 pid=6919 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:14.943000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:14.949000 audit[6919]: USER_START pid=6919 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:14.949000 audit[6921]: CRED_ACQ pid=6921 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.040000 audit[6919]: USER_END pid=6919 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.040000 audit[6919]: CRED_DISP pid=6919 uid=0 
auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.94.15:22-147.75.109.163:50106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:15.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.94.15:22-147.75.109.163:50112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:15.249000 audit[6942]: USER_ACCT pid=6942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.251000 audit[6942]: CRED_ACQ pid=6942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.251000 audit[6942]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff7309f40 a2=3 a3=0 items=0 ppid=1 pid=6942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:15.251000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:15.256000 audit[6942]: USER_START pid=6942 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.256000 audit[6944]: CRED_ACQ pid=6944 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.331000 audit[6942]: USER_END pid=6942 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.331000 audit[6942]: CRED_DISP pid=6942 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:15.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.94.15:22-147.75.109.163:50112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:36:15.873000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:36:15.873000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0018551d0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:36:15.873000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:36:15.873000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:36:15.873000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c003580480 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:36:15.873000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:36:15.903000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:36:15.903000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c0006b4f30 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:36:15.903000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:36:15.903000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="sda9" ino=520989 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:36:15.903000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c0125a4e10 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 
01:36:15.903000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:36:15.903000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:36:15.903000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="sda9" ino=520985 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:36:15.903000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c00f569bc0 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:36:15.903000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:36:15.903000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=72 a1=c0125a4e70 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:36:15.903000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:36:15.903000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="sda9" ino=520991 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:36:15.903000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c00f2a2060 a2=fc6 a3=0 items=0 ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:36:15.903000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:36:15.903000 audit[2293]: AVC avc: denied { watch } for pid=2293 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:36:15.903000 audit[2293]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=71 a1=c011714000 a2=fc6 a3=0 items=0 
ppid=2158 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:svirt_lxc_net_t:s0:c525,c1019 key=(null) Nov 1 01:36:15.903000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E39342E3135002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Nov 1 01:36:15.976014 update_engine[1555]: I1101 01:36:15.975960 1555 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 01:36:15.976217 update_engine[1555]: I1101 01:36:15.976097 1555 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 01:36:15.976217 update_engine[1555]: E1101 01:36:15.976147 1555 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 01:36:15.976217 update_engine[1555]: I1101 01:36:15.976188 1555 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 1 01:36:16.155561 kubelet[2505]: E1101 01:36:16.155429 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:36:18.155683 kubelet[2505]: E1101 01:36:18.155586 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:36:18.157234 kubelet[2505]: E1101 01:36:18.156458 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:36:18.697000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:36:18.697000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0024c9860 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:36:18.697000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:36:18.697000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:36:18.697000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:36:18.697000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c00258a8a0 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:36:18.697000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c003536260 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:36:18.697000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:36:18.697000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:36:18.697000 audit[2324]: AVC avc: denied { watch } for pid=2324 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="sda9" ino=520983 scontext=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Nov 1 01:36:18.697000 audit[2324]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=c 
a1=c003325420 a2=fc6 a3=0 items=0 ppid=2183 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:svirt_lxc_net_t:s0:c601,c925 key=(null) Nov 1 01:36:18.697000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Nov 1 01:36:20.157349 kubelet[2505]: E1101 01:36:20.157229 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:36:20.343855 systemd[1]: Started sshd@16-139.178.94.15:22-147.75.109.163:35114.service. Nov 1 01:36:20.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.94.15:22-147.75.109.163:35114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:20.386674 kernel: kauditd_printk_skb: 59 callbacks suppressed Nov 1 01:36:20.386792 kernel: audit: type=1130 audit(1761960980.343:1453): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.94.15:22-147.75.109.163:35114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:20.503000 audit[6975]: USER_ACCT pid=6975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:20.504366 sshd[6975]: Accepted publickey for core from 147.75.109.163 port 35114 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:20.505751 sshd[6975]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:20.508094 systemd-logind[1596]: New session 18 of user core. Nov 1 01:36:20.508600 systemd[1]: Started session-18.scope. Nov 1 01:36:20.586840 sshd[6975]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:20.588172 systemd[1]: sshd@16-139.178.94.15:22-147.75.109.163:35114.service: Deactivated successfully. Nov 1 01:36:20.588610 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 01:36:20.588942 systemd-logind[1596]: Session 18 logged out. Waiting for processes to exit. Nov 1 01:36:20.589365 systemd-logind[1596]: Removed session 18. 
Nov 1 01:36:20.505000 audit[6975]: CRED_ACQ pid=6975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:20.686289 kernel: audit: type=1101 audit(1761960980.503:1454): pid=6975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:20.686385 kernel: audit: type=1103 audit(1761960980.505:1455): pid=6975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:20.686424 kernel: audit: type=1006 audit(1761960980.505:1456): pid=6975 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Nov 1 01:36:20.505000 audit[6975]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd34990830 a2=3 a3=0 items=0 ppid=1 pid=6975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:20.837126 kernel: audit: type=1300 audit(1761960980.505:1456): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd34990830 a2=3 a3=0 items=0 ppid=1 pid=6975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:20.837208 kernel: audit: type=1327 audit(1761960980.505:1456): proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:20.505000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:20.510000 audit[6975]: USER_START pid=6975 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:20.868407 kernel: audit: type=1105 audit(1761960980.510:1457): pid=6975 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:20.511000 audit[6977]: CRED_ACQ pid=6977 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:21.051681 kernel: audit: type=1103 audit(1761960980.511:1458): pid=6977 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:21.051745 kernel: audit: type=1106 audit(1761960980.587:1459): pid=6975 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:20.587000 audit[6975]: USER_END pid=6975 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:21.147310 kernel: audit: type=1104 audit(1761960980.587:1460): pid=6975 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:20.587000 audit[6975]: CRED_DISP pid=6975 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:20.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.94.15:22-147.75.109.163:35114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:22.154084 kubelet[2505]: E1101 01:36:22.154056 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:36:23.156270 kubelet[2505]: E1101 01:36:23.156131 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:36:25.589777 systemd[1]: Started sshd@17-139.178.94.15:22-147.75.109.163:35118.service. Nov 1 01:36:25.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.94.15:22-147.75.109.163:35118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:25.616707 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 01:36:25.616820 kernel: audit: type=1130 audit(1761960985.589:1462): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.94.15:22-147.75.109.163:35118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:36:25.733000 audit[7000]: USER_ACCT pid=7000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:25.733982 sshd[7000]: Accepted publickey for core from 147.75.109.163 port 35118 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:25.739855 sshd[7000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:25.743001 systemd-logind[1596]: New session 19 of user core. Nov 1 01:36:25.744148 systemd[1]: Started session-19.scope. Nov 1 01:36:25.739000 audit[7000]: CRED_ACQ pid=7000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:25.825961 sshd[7000]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:25.827418 systemd[1]: sshd@17-139.178.94.15:22-147.75.109.163:35118.service: Deactivated successfully. Nov 1 01:36:25.828120 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 01:36:25.828561 systemd-logind[1596]: Session 19 logged out. Waiting for processes to exit. Nov 1 01:36:25.829019 systemd-logind[1596]: Removed session 19. Nov 1 01:36:25.915961 kernel: audit: type=1101 audit(1761960985.733:1463): pid=7000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:25.916052 kernel: audit: type=1103 audit(1761960985.739:1464): pid=7000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:25.916069 kernel: audit: type=1006 audit(1761960985.739:1465): pid=7000 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Nov 1 01:36:25.974541 update_engine[1555]: I1101 01:36:25.974496 1555 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 01:36:25.739000 audit[7000]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe495f1e0 a2=3 a3=0 items=0 ppid=1 pid=7000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:25.974827 update_engine[1555]: I1101 01:36:25.974629 1555 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 01:36:25.974827 update_engine[1555]: E1101 01:36:25.974678 1555 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 01:36:25.974827 update_engine[1555]: I1101 01:36:25.974715 1555 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 1 01:36:26.066674 kernel: audit: type=1300 audit(1761960985.739:1465): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe495f1e0 a2=3 a3=0 items=0 ppid=1 pid=7000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:26.066717 kernel: audit: type=1327 audit(1761960985.739:1465): 
proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:25.739000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:25.746000 audit[7000]: USER_START pid=7000 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:26.191758 kernel: audit: type=1105 audit(1761960985.746:1466): pid=7000 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:26.191820 kernel: audit: type=1103 audit(1761960985.747:1467): pid=7002 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:25.747000 audit[7002]: CRED_ACQ pid=7002 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:25.826000 audit[7000]: USER_END pid=7000 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:26.376867 kernel: audit: type=1106 audit(1761960985.826:1468): pid=7000 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:26.376935 kernel: audit: type=1104 audit(1761960985.826:1469): pid=7000 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:25.826000 audit[7000]: CRED_DISP pid=7000 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:25.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.94.15:22-147.75.109.163:35118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:36:29.155017 kubelet[2505]: E1101 01:36:29.154931 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:36:30.155415 kubelet[2505]: E1101 01:36:30.155351 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:36:30.831041 systemd[1]: Started sshd@18-139.178.94.15:22-147.75.109.163:53694.service. Nov 1 01:36:30.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.94.15:22-147.75.109.163:53694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:30.862843 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 01:36:30.862950 kernel: audit: type=1130 audit(1761960990.830:1471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.94.15:22-147.75.109.163:53694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:30.980000 audit[7027]: USER_ACCT pid=7027 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:30.981300 sshd[7027]: Accepted publickey for core from 147.75.109.163 port 53694 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:30.982796 sshd[7027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:30.985295 systemd-logind[1596]: New session 20 of user core. Nov 1 01:36:30.985939 systemd[1]: Started session-20.scope. Nov 1 01:36:31.062985 sshd[7027]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:31.064445 systemd[1]: sshd@18-139.178.94.15:22-147.75.109.163:53694.service: Deactivated successfully. Nov 1 01:36:31.064887 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 01:36:31.065222 systemd-logind[1596]: Session 20 logged out. Waiting for processes to exit. 
Nov 1 01:36:31.065580 systemd-logind[1596]: Removed session 20. Nov 1 01:36:30.982000 audit[7027]: CRED_ACQ pid=7027 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:31.154210 kubelet[2505]: E1101 01:36:31.154156 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:36:31.163148 kernel: audit: type=1101 audit(1761960990.980:1472): pid=7027 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:31.163189 kernel: audit: type=1103 audit(1761960990.982:1473): pid=7027 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:31.163205 kernel: audit: type=1006 audit(1761960990.982:1474): pid=7027 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Nov 1 01:36:30.982000 audit[7027]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd76a5ae80 a2=3 a3=0 items=0 ppid=1 pid=7027 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:31.313881 kernel: audit: type=1300 audit(1761960990.982:1474): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd76a5ae80 a2=3 a3=0 items=0 ppid=1 pid=7027 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:31.313936 kernel: audit: type=1327 audit(1761960990.982:1474): proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:30.982000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:30.987000 audit[7027]: USER_START pid=7027 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:31.438910 kernel: audit: type=1105 audit(1761960990.987:1475): pid=7027 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:31.438985 kernel: audit: type=1103 audit(1761960990.988:1476): pid=7029 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:30.988000 audit[7029]: CRED_ACQ pid=7029 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:31.528208 kernel: audit: type=1106 audit(1761960991.063:1477): pid=7027 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:31.063000 audit[7027]: USER_END pid=7027 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:31.623782 kernel: audit: type=1104 audit(1761960991.063:1478): pid=7027 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:31.063000 audit[7027]: CRED_DISP pid=7027 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:31.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.94.15:22-147.75.109.163:53694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:36:32.156233 kubelet[2505]: E1101 01:36:32.156041 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:36:34.155557 kubelet[2505]: E1101 01:36:34.155387 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:36:35.979624 update_engine[1555]: I1101 01:36:35.979512 1555 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 01:36:35.980488 update_engine[1555]: I1101 01:36:35.980015 1555 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 01:36:35.980488 update_engine[1555]: E1101 01:36:35.980221 1555 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 01:36:35.980488 update_engine[1555]: I1101 01:36:35.980386 1555 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 1 01:36:35.980488 update_engine[1555]: I1101 01:36:35.980432 1555 omaha_request_action.cc:621] Omaha request response: Nov 1 01:36:35.980916 update_engine[1555]: E1101 01:36:35.980585 1555 omaha_request_action.cc:640] Omaha request network transfer failed. Nov 1 01:36:35.980916 update_engine[1555]: I1101 01:36:35.980614 1555 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 1 01:36:35.980916 update_engine[1555]: I1101 01:36:35.980624 1555 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 1 01:36:35.980916 update_engine[1555]: I1101 01:36:35.980633 1555 update_attempter.cc:306] Processing Done. Nov 1 01:36:35.980916 update_engine[1555]: E1101 01:36:35.980659 1555 update_attempter.cc:619] Update failed. 
Nov 1 01:36:35.980916 update_engine[1555]: I1101 01:36:35.980669 1555 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 1 01:36:35.980916 update_engine[1555]: I1101 01:36:35.980677 1555 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 1 01:36:35.980916 update_engine[1555]: I1101 01:36:35.980687 1555 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Nov 1 01:36:35.980916 update_engine[1555]: I1101 01:36:35.980841 1555 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 01:36:35.980916 update_engine[1555]: I1101 01:36:35.980895 1555 omaha_request_action.cc:270] Posting an Omaha request to disabled Nov 1 01:36:35.980916 update_engine[1555]: I1101 01:36:35.980906 1555 omaha_request_action.cc:271] Request: Nov 1 01:36:35.980916 update_engine[1555]: Nov 1 01:36:35.980916 update_engine[1555]: Nov 1 01:36:35.980916 update_engine[1555]: Nov 1 01:36:35.980916 update_engine[1555]: Nov 1 01:36:35.980916 update_engine[1555]: Nov 1 01:36:35.980916 update_engine[1555]: Nov 1 01:36:35.980916 update_engine[1555]: I1101 01:36:35.980917 1555 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 01:36:35.982883 update_engine[1555]: I1101 01:36:35.981256 1555 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 01:36:35.982883 update_engine[1555]: E1101 01:36:35.981448 1555 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 01:36:35.982883 update_engine[1555]: I1101 01:36:35.981585 1555 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 1 01:36:35.982883 update_engine[1555]: I1101 01:36:35.981600 1555 omaha_request_action.cc:621] Omaha request response: Nov 1 01:36:35.982883 update_engine[1555]: I1101 01:36:35.981611 1555 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 1 01:36:35.982883 update_engine[1555]: I1101 01:36:35.981621 1555 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 1 01:36:35.982883 update_engine[1555]: I1101 01:36:35.981628 1555 update_attempter.cc:306] Processing Done. Nov 1 01:36:35.982883 update_engine[1555]: I1101 01:36:35.981637 1555 update_attempter.cc:310] Error event sent. Nov 1 01:36:35.982883 update_engine[1555]: I1101 01:36:35.981657 1555 update_check_scheduler.cc:74] Next update check in 48m40s Nov 1 01:36:35.983844 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 1 01:36:35.983844 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 1 01:36:36.067570 systemd[1]: Started sshd@19-139.178.94.15:22-147.75.109.163:53702.service. Nov 1 01:36:36.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.94.15:22-147.75.109.163:53702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:36.094004 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 01:36:36.094071 kernel: audit: type=1130 audit(1761960996.067:1480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.94.15:22-147.75.109.163:53702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:36:36.212000 audit[7086]: USER_ACCT pid=7086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.213610 sshd[7086]: Accepted publickey for core from 147.75.109.163 port 53702 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:36.214779 sshd[7086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:36.217291 systemd-logind[1596]: New session 21 of user core. Nov 1 01:36:36.218009 systemd[1]: Started session-21.scope. Nov 1 01:36:36.299011 sshd[7086]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:36.300796 systemd[1]: sshd@19-139.178.94.15:22-147.75.109.163:53702.service: Deactivated successfully. Nov 1 01:36:36.301149 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 01:36:36.301451 systemd-logind[1596]: Session 21 logged out. Waiting for processes to exit. Nov 1 01:36:36.302046 systemd[1]: Started sshd@20-139.178.94.15:22-147.75.109.163:53704.service. Nov 1 01:36:36.302416 systemd-logind[1596]: Removed session 21. Nov 1 01:36:36.214000 audit[7086]: CRED_ACQ pid=7086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.395371 kernel: audit: type=1101 audit(1761960996.212:1481): pid=7086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.395457 kernel: audit: type=1103 audit(1761960996.214:1482): pid=7086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.395479 kernel: audit: type=1006 audit(1761960996.214:1483): pid=7086 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Nov 1 01:36:36.454656 kernel: audit: type=1300 audit(1761960996.214:1483): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff297a45b0 a2=3 a3=0 items=0 ppid=1 pid=7086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:36.214000 audit[7086]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff297a45b0 a2=3 a3=0 items=0 ppid=1 pid=7086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:36.214000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:36.577140 kernel: audit: type=1327 audit(1761960996.214:1483): proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:36.577213 kernel: audit: type=1105 audit(1761960996.220:1484): pid=7086 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.220000 audit[7086]: USER_START pid=7086 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.220000 audit[7088]: CRED_ACQ pid=7088 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.761136 kernel: audit: type=1103 audit(1761960996.220:1485): pid=7088 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.299000 audit[7086]: USER_END pid=7086 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.769770 sshd[7110]: Accepted publickey for core from 147.75.109.163 port 53704 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:36.856972 kernel: audit: type=1106 audit(1761960996.299:1486): pid=7086 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.857075 kernel: audit: type=1104 audit(1761960996.299:1487): pid=7086 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.299000 audit[7086]: CRED_DISP pid=7086 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.857443 sshd[7110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:36.859522 systemd-logind[1596]: New session 22 of user core. Nov 1 01:36:36.860069 systemd[1]: Started session-22.scope. Nov 1 01:36:36.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.94.15:22-147.75.109.163:53702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:36.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.94.15:22-147.75.109.163:53704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:36:36.769000 audit[7110]: USER_ACCT pid=7110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.856000 audit[7110]: CRED_ACQ pid=7110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.856000 audit[7110]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffee58e0f0 a2=3 a3=0 items=0 ppid=1 pid=7110 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:36.856000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:36.862000 audit[7110]: USER_START pid=7110 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:36.862000 audit[7112]: CRED_ACQ pid=7112 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:37.040025 sshd[7110]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:37.040000 audit[7110]: USER_END pid=7110 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:37.040000 audit[7110]: CRED_DISP pid=7110 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:37.042778 systemd[1]: sshd@20-139.178.94.15:22-147.75.109.163:53704.service: Deactivated successfully. Nov 1 01:36:37.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.94.15:22-147.75.109.163:53704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:37.043367 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 01:36:37.043994 systemd-logind[1596]: Session 22 logged out. Waiting for processes to exit. Nov 1 01:36:37.044990 systemd[1]: Started sshd@21-139.178.94.15:22-147.75.109.163:53706.service. Nov 1 01:36:37.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.94.15:22-147.75.109.163:53706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:37.045736 systemd-logind[1596]: Removed session 22. 
Nov 1 01:36:37.104000 audit[7134]: USER_ACCT pid=7134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:37.104928 sshd[7134]: Accepted publickey for core from 147.75.109.163 port 53706 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:37.105000 audit[7134]: CRED_ACQ pid=7134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:37.105000 audit[7134]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffee1f3870 a2=3 a3=0 items=0 ppid=1 pid=7134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:37.105000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:37.106274 sshd[7134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:37.110444 systemd-logind[1596]: New session 23 of user core. Nov 1 01:36:37.111730 systemd[1]: Started session-23.scope. Nov 1 01:36:37.115000 audit[7134]: USER_START pid=7134 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:37.117000 audit[7138]: CRED_ACQ pid=7138 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:37.155561 kubelet[2505]: E1101 01:36:37.155457 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:36:37.760000 audit[7162]: NETFILTER_CFG table=filter:122 family=2 entries=26 op=nft_register_rule pid=7162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:36:37.760000 audit[7162]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcdac98670 a2=0 a3=7ffcdac9865c items=0 ppid=2709 pid=7162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:37.760000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:36:37.769301 sshd[7134]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:37.769000 audit[7134]: USER_END pid=7134 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:37.769000 audit[7134]: CRED_DISP pid=7134 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:37.771886 systemd[1]: sshd@21-139.178.94.15:22-147.75.109.163:53706.service: Deactivated successfully. Nov 1 01:36:37.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.94.15:22-147.75.109.163:53706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:37.772473 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 01:36:37.772986 systemd-logind[1596]: Session 23 logged out. Waiting for processes to exit. Nov 1 01:36:37.773000 audit[7162]: NETFILTER_CFG table=nat:123 family=2 entries=20 op=nft_register_rule pid=7162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:36:37.773000 audit[7162]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcdac98670 a2=0 a3=0 items=0 ppid=2709 pid=7162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:37.773000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:36:37.773956 systemd[1]: Started sshd@22-139.178.94.15:22-147.75.109.163:53722.service. Nov 1 01:36:37.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.94.15:22-147.75.109.163:53722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:37.774647 systemd-logind[1596]: Removed session 23. Nov 1 01:36:37.820000 audit[7165]: USER_ACCT pid=7165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:37.821345 sshd[7165]: Accepted publickey for core from 147.75.109.163 port 53722 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:37.821000 audit[7165]: CRED_ACQ pid=7165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:37.821000 audit[7165]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce9420ab0 a2=3 a3=0 items=0 ppid=1 pid=7165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:37.821000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:37.822382 sshd[7165]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:37.825307 systemd-logind[1596]: New session 24 of user core. Nov 1 01:36:37.825985 systemd[1]: Started session-24.scope. 
Nov 1 01:36:37.828000 audit[7165]: USER_START pid=7165 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:37.829000 audit[7168]: CRED_ACQ pid=7168 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:38.028595 sshd[7165]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:38.029000 audit[7165]: USER_END pid=7165 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:38.029000 audit[7165]: CRED_DISP pid=7165 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:38.031809 systemd[1]: sshd@22-139.178.94.15:22-147.75.109.163:53722.service: Deactivated successfully. Nov 1 01:36:38.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.94.15:22-147.75.109.163:53722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:38.032440 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 01:36:38.033001 systemd-logind[1596]: Session 24 logged out. Waiting for processes to exit. Nov 1 01:36:38.034215 systemd[1]: Started sshd@23-139.178.94.15:22-147.75.109.163:53728.service. Nov 1 01:36:38.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.94.15:22-147.75.109.163:53728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:38.034962 systemd-logind[1596]: Removed session 24. 
Nov 1 01:36:38.085701 sshd[7189]: Accepted publickey for core from 147.75.109.163 port 53728 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:38.085000 audit[7189]: USER_ACCT pid=7189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:38.086000 audit[7189]: CRED_ACQ pid=7189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:38.086000 audit[7189]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6f14d990 a2=3 a3=0 items=0 ppid=1 pid=7189 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:38.086000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:38.086644 sshd[7189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:38.088848 systemd-logind[1596]: New session 25 of user core. Nov 1 01:36:38.089418 systemd[1]: Started session-25.scope. Nov 1 01:36:38.090000 audit[7189]: USER_START pid=7189 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:38.091000 audit[7193]: CRED_ACQ pid=7193 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:38.165594 sshd[7189]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:38.165000 audit[7189]: USER_END pid=7189 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:38.165000 audit[7189]: CRED_DISP pid=7189 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:38.167049 systemd[1]: sshd@23-139.178.94.15:22-147.75.109.163:53728.service: Deactivated successfully. Nov 1 01:36:38.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.94.15:22-147.75.109.163:53728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:38.167553 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 01:36:38.167930 systemd-logind[1596]: Session 25 logged out. Waiting for processes to exit. Nov 1 01:36:38.168317 systemd-logind[1596]: Removed session 25. 
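The PROCTITLE values in the audit records above are the raw command line of the audited process, hex-encoded with NUL bytes separating the arguments. The two values that recur in this section decode to "sshd: core [priv]" and "iptables-restore -w 5 --noflush --counters". A minimal Python sketch of that decoding (the helper name is illustrative, not part of any tool on the host):

    def decode_proctitle(hex_value: str) -> str:
        """Decode an audit PROCTITLE field: hex-encoded, NUL-separated argv."""
        raw = bytes.fromhex(hex_value)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    # Values copied from the records above
    print(decode_proctitle("737368643A20636F7265205B707269765D"))
    # -> sshd: core [priv]
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 --noflush --counters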
Nov 1 01:36:38.802000 audit[7218]: NETFILTER_CFG table=filter:124 family=2 entries=38 op=nft_register_rule pid=7218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:36:38.802000 audit[7218]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc48f97690 a2=0 a3=7ffc48f9767c items=0 ppid=2709 pid=7218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:38.802000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:36:38.818000 audit[7218]: NETFILTER_CFG table=nat:125 family=2 entries=20 op=nft_register_rule pid=7218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:36:38.818000 audit[7218]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc48f97690 a2=0 a3=0 items=0 ppid=2709 pid=7218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:38.818000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:36:42.155691 kubelet[2505]: E1101 01:36:42.155593 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bff9f9fd4-4vmq2" podUID="593908a5-f718-4b03-b095-540ff204a4bd" Nov 1 01:36:42.155691 kubelet[2505]: E1101 01:36:42.155624 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:36:42.352000 audit[7222]: NETFILTER_CFG table=filter:126 family=2 entries=26 op=nft_register_rule pid=7222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:36:42.379998 kernel: kauditd_printk_skb: 57 callbacks suppressed Nov 1 01:36:42.380032 kernel: audit: type=1325 audit(1761961002.352:1529): table=filter:126 family=2 entries=26 op=nft_register_rule pid=7222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:36:42.352000 audit[7222]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdaec1c2e0 a2=0 a3=7ffdaec1c2cc items=0 ppid=2709 pid=7222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:42.439431 kernel: audit: type=1300 audit(1761961002.352:1529): arch=c000003e syscall=46 
success=yes exit=5248 a0=3 a1=7ffdaec1c2e0 a2=0 a3=7ffdaec1c2cc items=0 ppid=2709 pid=7222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:42.352000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:36:42.591788 kernel: audit: type=1327 audit(1761961002.352:1529): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:36:42.595000 audit[7222]: NETFILTER_CFG table=nat:127 family=2 entries=104 op=nft_register_chain pid=7222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:36:42.595000 audit[7222]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdaec1c2e0 a2=0 a3=7ffdaec1c2cc items=0 ppid=2709 pid=7222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:42.752034 kernel: audit: type=1325 audit(1761961002.595:1530): table=nat:127 family=2 entries=104 op=nft_register_chain pid=7222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:36:42.752120 kernel: audit: type=1300 audit(1761961002.595:1530): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdaec1c2e0 a2=0 a3=7ffdaec1c2cc items=0 ppid=2709 pid=7222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:42.752135 kernel: audit: type=1327 audit(1761961002.595:1530): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:36:42.595000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:36:43.154596 kubelet[2505]: E1101 01:36:43.154551 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69" Nov 1 01:36:43.169828 systemd[1]: Started sshd@24-139.178.94.15:22-147.75.109.163:56268.service. Nov 1 01:36:43.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.94.15:22-147.75.109.163:56268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:36:43.260407 kernel: audit: type=1130 audit(1761961003.169:1531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.94.15:22-147.75.109.163:56268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:43.288000 audit[7224]: USER_ACCT pid=7224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:43.289079 sshd[7224]: Accepted publickey for core from 147.75.109.163 port 56268 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:43.291755 sshd[7224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:43.293861 systemd-logind[1596]: New session 26 of user core. Nov 1 01:36:43.294528 systemd[1]: Started session-26.scope. Nov 1 01:36:43.372447 sshd[7224]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:43.373924 systemd[1]: sshd@24-139.178.94.15:22-147.75.109.163:56268.service: Deactivated successfully. Nov 1 01:36:43.374367 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 01:36:43.374748 systemd-logind[1596]: Session 26 logged out. Waiting for processes to exit. Nov 1 01:36:43.375242 systemd-logind[1596]: Removed session 26. Nov 1 01:36:43.291000 audit[7224]: CRED_ACQ pid=7224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:43.473114 kernel: audit: type=1101 audit(1761961003.288:1532): pid=7224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:43.473180 kernel: audit: type=1103 audit(1761961003.291:1533): pid=7224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:43.473205 kernel: audit: type=1006 audit(1761961003.291:1534): pid=7224 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Nov 1 01:36:43.291000 audit[7224]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc14b354c0 a2=3 a3=0 items=0 ppid=1 pid=7224 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:43.291000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:43.296000 audit[7224]: USER_START pid=7224 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:43.297000 audit[7226]: CRED_ACQ pid=7226 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:43.372000 audit[7224]: USER_END pid=7224 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:43.372000 audit[7224]: CRED_DISP pid=7224 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:43.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.94.15:22-147.75.109.163:56268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:45.155207 kubelet[2505]: E1101 01:36:45.155122 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-649c6d6f48-6pq8q" podUID="2743a542-7119-47a8-937d-fec5c85bdcf2" Nov 1 01:36:46.154214 kubelet[2505]: E1101 01:36:46.154187 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6xhxh" podUID="9aac0066-097a-4582-8ce8-a3a1ddb41b3d" Nov 1 01:36:48.384334 systemd[1]: Started sshd@25-139.178.94.15:22-147.75.109.163:56274.service. Nov 1 01:36:48.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-139.178.94.15:22-147.75.109.163:56274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:48.426410 kernel: kauditd_printk_skb: 7 callbacks suppressed Nov 1 01:36:48.426503 kernel: audit: type=1130 audit(1761961008.384:1540): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-139.178.94.15:22-147.75.109.163:56274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:36:48.543000 audit[7249]: USER_ACCT pid=7249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:48.544258 sshd[7249]: Accepted publickey for core from 147.75.109.163 port 56274 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:48.545746 sshd[7249]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:48.548127 systemd-logind[1596]: New session 27 of user core. Nov 1 01:36:48.548655 systemd[1]: Started session-27.scope. Nov 1 01:36:48.626024 sshd[7249]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:48.627507 systemd[1]: sshd@25-139.178.94.15:22-147.75.109.163:56274.service: Deactivated successfully. Nov 1 01:36:48.628001 systemd[1]: session-27.scope: Deactivated successfully. Nov 1 01:36:48.628294 systemd-logind[1596]: Session 27 logged out. Waiting for processes to exit. Nov 1 01:36:48.628731 systemd-logind[1596]: Removed session 27. Nov 1 01:36:48.545000 audit[7249]: CRED_ACQ pid=7249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:48.727211 kernel: audit: type=1101 audit(1761961008.543:1541): pid=7249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:48.727265 kernel: audit: type=1103 audit(1761961008.545:1542): pid=7249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:48.727289 kernel: audit: type=1006 audit(1761961008.545:1543): pid=7249 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Nov 1 01:36:48.786357 kernel: audit: type=1300 audit(1761961008.545:1543): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc90f6a5e0 a2=3 a3=0 items=0 ppid=1 pid=7249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:48.545000 audit[7249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc90f6a5e0 a2=3 a3=0 items=0 ppid=1 pid=7249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:48.879145 kernel: audit: type=1327 audit(1761961008.545:1543): proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:48.545000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:48.550000 audit[7249]: USER_START pid=7249 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:49.005391 kernel: audit: type=1105 audit(1761961008.550:1544): 
pid=7249 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:49.005462 kernel: audit: type=1103 audit(1761961008.551:1545): pid=7251 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:49.094790 kernel: audit: type=1106 audit(1761961008.626:1546): pid=7249 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:48.551000 audit[7251]: CRED_ACQ pid=7251 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:48.626000 audit[7249]: USER_END pid=7249 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:48.626000 audit[7249]: CRED_DISP pid=7249 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:48.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-139.178.94.15:22-147.75.109.163:56274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:49.282616 kernel: audit: type=1104 audit(1761961008.626:1547): pid=7249 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:52.154998 kubelet[2505]: E1101 01:36:52.154908 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-lqwd9" podUID="cb08aa02-32db-4371-b5cc-c9a5a7fd22c8" Nov 1 01:36:53.633151 systemd[1]: Started sshd@26-139.178.94.15:22-147.75.109.163:55840.service. Nov 1 01:36:53.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-139.178.94.15:22-147.75.109.163:55840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:36:53.665079 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 01:36:53.665170 kernel: audit: type=1130 audit(1761961013.633:1549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-139.178.94.15:22-147.75.109.163:55840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:53.784000 audit[7273]: USER_ACCT pid=7273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:53.785349 sshd[7273]: Accepted publickey for core from 147.75.109.163 port 55840 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 01:36:53.786692 sshd[7273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:36:53.789080 systemd-logind[1596]: New session 28 of user core. Nov 1 01:36:53.789559 systemd[1]: Started session-28.scope. Nov 1 01:36:53.786000 audit[7273]: CRED_ACQ pid=7273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:53.878482 kernel: audit: type=1101 audit(1761961013.784:1550): pid=7273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:53.878516 kernel: audit: type=1103 audit(1761961013.786:1551): pid=7273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:53.952570 sshd[7273]: pam_unix(sshd:session): session closed for user core Nov 1 01:36:53.954093 systemd[1]: sshd@26-139.178.94.15:22-147.75.109.163:55840.service: Deactivated successfully. Nov 1 01:36:53.954723 systemd[1]: session-28.scope: Deactivated successfully. Nov 1 01:36:53.955194 systemd-logind[1596]: Session 28 logged out. Waiting for processes to exit. Nov 1 01:36:53.955677 systemd-logind[1596]: Removed session 28. 
Nov 1 01:36:54.026807 kernel: audit: type=1006 audit(1761961013.786:1552): pid=7273 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Nov 1 01:36:54.026895 kernel: audit: type=1300 audit(1761961013.786:1552): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf2809b10 a2=3 a3=0 items=0 ppid=1 pid=7273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:53.786000 audit[7273]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf2809b10 a2=3 a3=0 items=0 ppid=1 pid=7273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:36:53.786000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:54.149602 kernel: audit: type=1327 audit(1761961013.786:1552): proctitle=737368643A20636F7265205B707269765D Nov 1 01:36:54.149655 kernel: audit: type=1105 audit(1761961013.791:1553): pid=7273 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:53.791000 audit[7273]: USER_START pid=7273 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:54.153547 kubelet[2505]: E1101 01:36:54.153509 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b6cfc8885-rhjss" podUID="438a7b01-7b7b-439d-a5c9-a6d4d681a41f" Nov 1 01:36:53.791000 audit[7275]: CRED_ACQ pid=7275 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:54.333596 kernel: audit: type=1103 audit(1761961013.791:1554): pid=7275 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:54.333686 kernel: audit: type=1106 audit(1761961013.952:1555): pid=7273 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:53.952000 audit[7273]: USER_END pid=7273 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:53.952000 audit[7273]: CRED_DISP pid=7273 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:54.518712 kernel: audit: type=1104 audit(1761961013.952:1556): pid=7273 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:36:53.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-139.178.94.15:22-147.75.109.163:55840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:36:55.157155 kubelet[2505]: E1101 01:36:55.157035 2505 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9wz7k" podUID="79df0ba2-6e86-422c-8f93-652dfb942b69"
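The kubelet entries repeated through this section all report the same failure mode: several ghcr.io/flatcar/calico images at tag v3.30.4 cannot be resolved (the registry returns "not found"), so the affected pods stay in ImagePullBackOff. A minimal Python sketch (the regular expressions are written only against the escaped quoting visible in these journal lines) that collects the failing image references and the pods still backing off from a saved copy of this log:

    import re
    from collections import defaultdict

    # Matches the back-off image reference through the escaped quotes (\\\")
    # exactly as they appear in the journal output above.
    IMAGE_RE = re.compile(r'Back-off pulling image [\\"]+([^"\\]+)')
    POD_RE = re.compile(r'pod="([^"]+)"')

    def failing_images(log_lines):
        """Collect, per missing image reference, the pod="..." names that
        appear on the same journal line."""
        failures = defaultdict(set)
        for line in log_lines:
            if "ImagePullBackOff" not in line:
                continue
            pods = POD_RE.findall(line)
            for image in IMAGE_RE.findall(line):
                failures[image].update(pods)
        return failures

    # Usage: failing_images(open("node.log")) maps, for example,
    # "ghcr.io/flatcar/calico/apiserver:v3.30.4" to the pods still in back-off.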